2026-02-14 02:16:33.301761 | Job console starting
2026-02-14 02:16:33.324129 | Updating git repos
2026-02-14 02:16:33.387400 | Cloning repos into workspace
2026-02-14 02:16:33.599902 | Restoring repo states
2026-02-14 02:16:33.623826 | Merging changes
2026-02-14 02:16:33.623848 | Checking out repos
2026-02-14 02:16:33.899842 | Preparing playbooks
2026-02-14 02:16:34.568415 | Running Ansible setup
2026-02-14 02:16:40.087739 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-02-14 02:16:40.882169 |
2026-02-14 02:16:40.882326 | PLAY [Base pre]
2026-02-14 02:16:40.899225 |
2026-02-14 02:16:40.899356 | TASK [Setup log path fact]
2026-02-14 02:16:40.930038 | orchestrator | ok
2026-02-14 02:16:40.947407 |
2026-02-14 02:16:40.947549 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-14 02:16:40.989036 | orchestrator | ok
2026-02-14 02:16:41.001194 |
2026-02-14 02:16:41.001317 | TASK [emit-job-header : Print job information]
2026-02-14 02:16:41.043541 | # Job Information
2026-02-14 02:16:41.043783 | Ansible Version: 2.16.14
2026-02-14 02:16:41.043824 | Job: testbed-upgrade-stable-rc-ubuntu-24.04
2026-02-14 02:16:41.043859 | Pipeline: periodic-midnight
2026-02-14 02:16:41.043882 | Executor: 521e9411259a
2026-02-14 02:16:41.043903 | Triggered by: https://github.com/osism/testbed
2026-02-14 02:16:41.043925 | Event ID: 7264ba07d94c4706858642450887c310
2026-02-14 02:16:41.051071 |
2026-02-14 02:16:41.051193 | LOOP [emit-job-header : Print node information]
2026-02-14 02:16:41.178797 | orchestrator | ok:
2026-02-14 02:16:41.179120 | orchestrator | # Node Information
2026-02-14 02:16:41.179181 | orchestrator | Inventory Hostname: orchestrator
2026-02-14 02:16:41.179225 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-02-14 02:16:41.179264 | orchestrator | Username: zuul-testbed03
2026-02-14 02:16:41.179301 | orchestrator | Distro: Debian 12.13
2026-02-14 02:16:41.179341 | orchestrator | Provider: static-testbed
2026-02-14 02:16:41.179378 | orchestrator | Region:
2026-02-14 02:16:41.179414 | orchestrator | Label: testbed-orchestrator
2026-02-14 02:16:41.179448 | orchestrator | Product Name: OpenStack Nova
2026-02-14 02:16:41.179482 | orchestrator | Interface IP: 81.163.193.140
2026-02-14 02:16:41.206907 |
2026-02-14 02:16:41.207078 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-02-14 02:16:41.711213 | orchestrator -> localhost | changed
2026-02-14 02:16:41.719623 |
2026-02-14 02:16:41.719797 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-02-14 02:16:42.758281 | orchestrator -> localhost | changed
2026-02-14 02:16:42.784143 |
2026-02-14 02:16:42.784284 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-02-14 02:16:43.084054 | orchestrator -> localhost | ok
2026-02-14 02:16:43.099452 |
2026-02-14 02:16:43.099622 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-02-14 02:16:43.138032 | orchestrator | ok
2026-02-14 02:16:43.158743 | orchestrator | included: /var/lib/zuul/builds/1a58906f9cdd43b884cb44b9013c953c/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-02-14 02:16:43.167136 |
2026-02-14 02:16:43.167245 | TASK [add-build-sshkey : Create Temp SSH key]
2026-02-14 02:16:45.408960 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-02-14 02:16:45.409571 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/1a58906f9cdd43b884cb44b9013c953c/work/1a58906f9cdd43b884cb44b9013c953c_id_rsa
2026-02-14 02:16:45.409719 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/1a58906f9cdd43b884cb44b9013c953c/work/1a58906f9cdd43b884cb44b9013c953c_id_rsa.pub
2026-02-14 02:16:45.409803 | orchestrator -> localhost | The key fingerprint is:
2026-02-14 02:16:45.409876 | orchestrator -> localhost | SHA256:IC7P81YDIopAVtFvdKihI32O0SQ8RH456yW+hzfd/6c zuul-build-sshkey
2026-02-14 02:16:45.409942 | orchestrator -> localhost | The key's randomart image is:
2026-02-14 02:16:45.410030 | orchestrator -> localhost | +---[RSA 3072]----+
2026-02-14 02:16:45.410095 | orchestrator -> localhost | | ==o . |
2026-02-14 02:16:45.410159 | orchestrator -> localhost | | o.+ +.o . |
2026-02-14 02:16:45.410220 | orchestrator -> localhost | |o ..B+* . |
2026-02-14 02:16:45.410277 | orchestrator -> localhost | |...*o=++ |
2026-02-14 02:16:45.410337 | orchestrator -> localhost | |o.o.B+.oS |
2026-02-14 02:16:45.410406 | orchestrator -> localhost | |o =o.o o |
2026-02-14 02:16:45.410467 | orchestrator -> localhost | | +o..... |
2026-02-14 02:16:45.410527 | orchestrator -> localhost | | +o+ . . .|
2026-02-14 02:16:45.410590 | orchestrator -> localhost | | o+ . ..Eo.|
2026-02-14 02:16:45.410650 | orchestrator -> localhost | +----[SHA256]-----+
2026-02-14 02:16:45.410896 | orchestrator -> localhost | ok: Runtime: 0:00:01.729720
2026-02-14 02:16:45.426127 |
2026-02-14 02:16:45.426280 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-02-14 02:16:45.461337 | orchestrator | ok
2026-02-14 02:16:45.474104 | orchestrator | included: /var/lib/zuul/builds/1a58906f9cdd43b884cb44b9013c953c/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-02-14 02:16:45.483404 |
2026-02-14 02:16:45.483501 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-02-14 02:16:45.507775 | orchestrator | skipping: Conditional result was False
2026-02-14 02:16:45.515552 |
2026-02-14 02:16:45.515665 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-02-14 02:16:46.138087 | orchestrator | changed
2026-02-14 02:16:46.149903 |
2026-02-14 02:16:46.150057 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-02-14 02:16:46.422163 | orchestrator | ok
2026-02-14 02:16:46.430668 |
2026-02-14 02:16:46.430855 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-02-14 02:16:46.866097 | orchestrator | ok
2026-02-14 02:16:46.875058 |
2026-02-14 02:16:46.875199 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-02-14 02:16:47.331055 | orchestrator | ok
2026-02-14 02:16:47.339563 |
2026-02-14 02:16:47.339705 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-02-14 02:16:47.364980 | orchestrator | skipping: Conditional result was False
2026-02-14 02:16:47.374508 |
2026-02-14 02:16:47.374636 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-02-14 02:16:47.816345 | orchestrator -> localhost | changed
2026-02-14 02:16:47.835572 |
2026-02-14 02:16:47.835738 | TASK [add-build-sshkey : Add back temp key]
2026-02-14 02:16:48.186577 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/1a58906f9cdd43b884cb44b9013c953c/work/1a58906f9cdd43b884cb44b9013c953c_id_rsa (zuul-build-sshkey)
2026-02-14 02:16:48.187069 | orchestrator -> localhost | ok: Runtime: 0:00:00.019132
2026-02-14 02:16:48.203706 |
2026-02-14 02:16:48.203858 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-02-14 02:16:48.630548 | orchestrator | ok
2026-02-14 02:16:48.639345 |
2026-02-14 02:16:48.639476 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-02-14 02:16:48.674195 | orchestrator | skipping: Conditional result was False
2026-02-14 02:16:48.728475 |
2026-02-14 02:16:48.729271 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-02-14 02:16:49.125196 | orchestrator | ok
2026-02-14 02:16:49.140642 |
2026-02-14 02:16:49.140802 | TASK [validate-host : Define zuul_info_dir fact]
2026-02-14 02:16:49.189838 | orchestrator | ok
2026-02-14 02:16:49.200503 |
2026-02-14 02:16:49.200659 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-02-14 02:16:49.508831 | orchestrator -> localhost | ok
2026-02-14 02:16:49.518786 |
2026-02-14 02:16:49.518956 | TASK [validate-host : Collect information about the host]
2026-02-14 02:16:50.655915 | orchestrator | ok
2026-02-14 02:16:50.677771 |
2026-02-14 02:16:50.677965 | TASK [validate-host : Sanitize hostname]
2026-02-14 02:16:50.752766 | orchestrator | ok
2026-02-14 02:16:50.762072 |
2026-02-14 02:16:50.762221 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-02-14 02:16:51.332631 | orchestrator -> localhost | changed
2026-02-14 02:16:51.339282 |
2026-02-14 02:16:51.339391 | TASK [validate-host : Collect information about zuul worker]
2026-02-14 02:16:51.772869 | orchestrator | ok
2026-02-14 02:16:51.781339 |
2026-02-14 02:16:51.781490 | TASK [validate-host : Write out all zuul information for each host]
2026-02-14 02:16:52.354076 | orchestrator -> localhost | changed
2026-02-14 02:16:52.365729 |
2026-02-14 02:16:52.365846 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-02-14 02:16:52.657627 | orchestrator | ok
2026-02-14 02:16:52.667929 |
2026-02-14 02:16:52.668124 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-02-14 02:17:18.399069 | orchestrator | changed:
2026-02-14 02:17:18.399312 | orchestrator | .d..t...... src/
2026-02-14 02:17:18.399349 | orchestrator | .d..t...... src/github.com/
2026-02-14 02:17:18.399376 | orchestrator | .d..t...... src/github.com/osism/
2026-02-14 02:17:18.399399 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-02-14 02:17:18.399420 | orchestrator | RedHat.yml
2026-02-14 02:17:18.414958 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-02-14 02:17:18.414976 | orchestrator | RedHat.yml
2026-02-14 02:17:18.415030 | orchestrator | = 1.53.0"...
2026-02-14 02:17:32.389926 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-02-14 02:17:32.410903 | orchestrator | - Finding latest version of hashicorp/null...
2026-02-14 02:17:32.877800 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-02-14 02:17:33.755500 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-02-14 02:17:34.130873 | orchestrator | - Installing hashicorp/local v2.6.2...
2026-02-14 02:17:34.815804 | orchestrator | - Installed hashicorp/local v2.6.2 (signed, key ID 0C0AF313E5FD9F80)
2026-02-14 02:17:35.181378 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-02-14 02:17:35.844526 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-02-14 02:17:35.844587 | orchestrator |
2026-02-14 02:17:35.844593 | orchestrator | Providers are signed by their developers.
2026-02-14 02:17:35.844598 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-02-14 02:17:35.844610 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-02-14 02:17:35.844648 | orchestrator |
2026-02-14 02:17:35.844654 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-02-14 02:17:35.844658 | orchestrator | selections it made above. Include this file in your version control repository
2026-02-14 02:17:35.844672 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-02-14 02:17:35.844684 | orchestrator | you run "tofu init" in the future.
2026-02-14 02:17:35.845053 | orchestrator |
2026-02-14 02:17:35.845093 | orchestrator | OpenTofu has been successfully initialized!
2026-02-14 02:17:35.845115 | orchestrator |
2026-02-14 02:17:35.845120 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-02-14 02:17:35.845125 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-02-14 02:17:35.845129 | orchestrator | should now work.
2026-02-14 02:17:35.845132 | orchestrator |
2026-02-14 02:17:35.845136 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-02-14 02:17:35.845140 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-02-14 02:17:35.845151 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-02-14 02:17:35.993528 | orchestrator | Created and switched to workspace "ci"!
2026-02-14 02:17:35.993564 | orchestrator |
2026-02-14 02:17:35.993570 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-02-14 02:17:35.993575 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-02-14 02:17:35.993593 | orchestrator | for this configuration.
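The `tofu init` output above resolved three providers (openstack v3.4.0, local v2.6.2, null v3.2.4) and wrote `.terraform.lock.hcl`. As a hedged illustration only, a `required_providers` block like the following could produce that resolution; the actual constraints in the osism/testbed configuration are not fully visible here (only the `">= 2.2.0"` constraint for hashicorp/local appears in this excerpt, and the openstack constraint is truncated to `= 1.53.0"`):

```hcl
# Hypothetical reconstruction -- the real versions file in osism/testbed may differ.
terraform {
  required_providers {
    openstack = {
      source = "terraform-provider-openstack/openstack" # resolved to v3.4.0 in the log
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0" # constraint visible in the init output; resolved to v2.6.2
    }
    null = {
      source = "hashicorp/null" # "latest version" in the log; resolved to v3.2.4
    }
  }
}
```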
2026-02-14 02:17:36.092651 | orchestrator | ci.auto.tfvars
2026-02-14 02:17:36.095692 | orchestrator | default_custom.tf
2026-02-14 02:17:36.918121 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-02-14 02:17:37.579581 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-02-14 02:17:37.878824 | orchestrator |
2026-02-14 02:17:37.878875 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-02-14 02:17:37.878882 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-02-14 02:17:37.878891 | orchestrator | + create
2026-02-14 02:17:37.878895 | orchestrator | <= read (data resources)
2026-02-14 02:17:37.878899 | orchestrator |
2026-02-14 02:17:37.878902 | orchestrator | OpenTofu will perform the following actions:
2026-02-14 02:17:37.878907 | orchestrator |
2026-02-14 02:17:37.878910 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-02-14 02:17:37.878914 | orchestrator | # (config refers to values not yet known)
2026-02-14 02:17:37.878917 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-02-14 02:17:37.878921 | orchestrator | + checksum = (known after apply)
2026-02-14 02:17:37.878924 | orchestrator | + created_at = (known after apply)
2026-02-14 02:17:37.878927 | orchestrator | + file = (known after apply)
2026-02-14 02:17:37.878930 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.878948 | orchestrator | + metadata = (known after apply)
2026-02-14 02:17:37.878952 | orchestrator | + min_disk_gb = (known after apply)
2026-02-14 02:17:37.878955 | orchestrator | + min_ram_mb = (known after apply)
2026-02-14 02:17:37.878958 | orchestrator | + most_recent = true
2026-02-14 02:17:37.878961 | orchestrator | + name = (known after apply)
2026-02-14 02:17:37.878964 | orchestrator | + protected = (known after apply)
2026-02-14 02:17:37.878967 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.878973 | orchestrator | + schema = (known after apply)
2026-02-14 02:17:37.878976 | orchestrator | + size_bytes = (known after apply)
2026-02-14 02:17:37.878979 | orchestrator | + tags = (known after apply)
2026-02-14 02:17:37.878982 | orchestrator | + updated_at = (known after apply)
2026-02-14 02:17:37.878985 | orchestrator | }
2026-02-14 02:17:37.878990 | orchestrator |
2026-02-14 02:17:37.878993 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-02-14 02:17:37.878996 | orchestrator | # (config refers to values not yet known)
2026-02-14 02:17:37.878999 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-02-14 02:17:37.879002 | orchestrator | + checksum = (known after apply)
2026-02-14 02:17:37.879006 | orchestrator | + created_at = (known after apply)
2026-02-14 02:17:37.879009 | orchestrator | + file = (known after apply)
2026-02-14 02:17:37.879012 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.879015 | orchestrator | + metadata = (known after apply)
2026-02-14 02:17:37.879018 | orchestrator | + min_disk_gb = (known after apply)
2026-02-14 02:17:37.879021 | orchestrator | + min_ram_mb = (known after apply)
2026-02-14 02:17:37.879024 | orchestrator | + most_recent = true
2026-02-14 02:17:37.879027 | orchestrator | + name = (known after apply)
2026-02-14 02:17:37.879030 | orchestrator | + protected = (known after apply)
2026-02-14 02:17:37.879033 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.879036 | orchestrator | + schema = (known after apply)
2026-02-14 02:17:37.879040 | orchestrator | + size_bytes = (known after apply)
2026-02-14 02:17:37.879043 | orchestrator | + tags = (known after apply)
2026-02-14 02:17:37.879046 | orchestrator | + updated_at = (known after apply)
2026-02-14 02:17:37.879049 | orchestrator | }
2026-02-14 02:17:37.879073 | orchestrator |
2026-02-14 02:17:37.879077 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-02-14 02:17:37.879081 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-02-14 02:17:37.879084 | orchestrator | + content = (known after apply)
2026-02-14 02:17:37.879087 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-14 02:17:37.879090 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-14 02:17:37.879094 | orchestrator | + content_md5 = (known after apply)
2026-02-14 02:17:37.879097 | orchestrator | + content_sha1 = (known after apply)
2026-02-14 02:17:37.879100 | orchestrator | + content_sha256 = (known after apply)
2026-02-14 02:17:37.879103 | orchestrator | + content_sha512 = (known after apply)
2026-02-14 02:17:37.879106 | orchestrator | + directory_permission = "0777"
2026-02-14 02:17:37.879109 | orchestrator | + file_permission = "0644"
2026-02-14 02:17:37.879112 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-02-14 02:17:37.879116 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.879119 | orchestrator | }
2026-02-14 02:17:37.879123 | orchestrator |
2026-02-14 02:17:37.879126 | orchestrator | # local_file.id_rsa_pub will be created
2026-02-14 02:17:37.879130 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-02-14 02:17:37.879133 | orchestrator | + content = (known after apply)
2026-02-14 02:17:37.879136 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-14 02:17:37.879139 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-14 02:17:37.879142 | orchestrator | + content_md5 = (known after apply)
2026-02-14 02:17:37.879145 | orchestrator | + content_sha1 = (known after apply)
2026-02-14 02:17:37.879148 | orchestrator | + content_sha256 = (known after apply)
2026-02-14 02:17:37.879151 | orchestrator | + content_sha512 = (known after apply)
2026-02-14 02:17:37.879154 | orchestrator | + directory_permission = "0777"
2026-02-14 02:17:37.879158 | orchestrator | + file_permission = "0644"
2026-02-14 02:17:37.879165 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-02-14 02:17:37.879168 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.879171 | orchestrator | }
2026-02-14 02:17:37.879175 | orchestrator |
2026-02-14 02:17:37.879184 | orchestrator | # local_file.inventory will be created
2026-02-14 02:17:37.879188 | orchestrator | + resource "local_file" "inventory" {
2026-02-14 02:17:37.879191 | orchestrator | + content = (known after apply)
2026-02-14 02:17:37.879194 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-14 02:17:37.879197 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-14 02:17:37.879200 | orchestrator | + content_md5 = (known after apply)
2026-02-14 02:17:37.879203 | orchestrator | + content_sha1 = (known after apply)
2026-02-14 02:17:37.879206 | orchestrator | + content_sha256 = (known after apply)
2026-02-14 02:17:37.879209 | orchestrator | + content_sha512 = (known after apply)
2026-02-14 02:17:37.879213 | orchestrator | + directory_permission = "0777"
2026-02-14 02:17:37.879216 | orchestrator | + file_permission = "0644"
2026-02-14 02:17:37.879219 | orchestrator | + filename = "inventory.ci"
2026-02-14 02:17:37.879222 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.879225 | orchestrator | }
2026-02-14 02:17:37.879343 | orchestrator |
2026-02-14 02:17:37.879348 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-02-14 02:17:37.879351 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-02-14 02:17:37.879355 | orchestrator | + content = (sensitive value)
2026-02-14 02:17:37.879358 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-14 02:17:37.879361 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-14 02:17:37.879364 | orchestrator | + content_md5 = (known after apply)
2026-02-14 02:17:37.879367 | orchestrator | + content_sha1 = (known after apply)
2026-02-14 02:17:37.879370 | orchestrator | + content_sha256 = (known after apply)
2026-02-14 02:17:37.879373 | orchestrator | + content_sha512 = (known after apply)
2026-02-14 02:17:37.879377 | orchestrator | + directory_permission = "0700"
2026-02-14 02:17:37.879380 | orchestrator | + file_permission = "0600"
2026-02-14 02:17:37.879383 | orchestrator | + filename = ".id_rsa.ci"
2026-02-14 02:17:37.879386 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.879390 | orchestrator | }
2026-02-14 02:17:37.879394 | orchestrator |
2026-02-14 02:17:37.879397 | orchestrator | # null_resource.node_semaphore will be created
2026-02-14 02:17:37.879400 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-02-14 02:17:37.879404 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.879407 | orchestrator | }
2026-02-14 02:17:37.879410 | orchestrator |
2026-02-14 02:17:37.879413 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-02-14 02:17:37.879417 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-02-14 02:17:37.879420 | orchestrator | + attachment = (known after apply)
2026-02-14 02:17:37.879423 | orchestrator | + availability_zone = "nova"
2026-02-14 02:17:37.879426 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.879429 | orchestrator | + image_id = (known after apply)
2026-02-14 02:17:37.879432 | orchestrator | + metadata = (known after apply)
2026-02-14 02:17:37.879436 | orchestrator | + name = "testbed-volume-manager-base"
2026-02-14 02:17:37.879439 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.879442 | orchestrator | + size = 80
2026-02-14 02:17:37.879445 | orchestrator | + volume_retype_policy = "never"
2026-02-14 02:17:37.879448 | orchestrator | + volume_type = "ssd"
2026-02-14 02:17:37.879451 | orchestrator | }
2026-02-14 02:17:37.879455 | orchestrator |
2026-02-14 02:17:37.879459 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-02-14 02:17:37.879462 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-14 02:17:37.879465 | orchestrator | + attachment = (known after apply)
2026-02-14 02:17:37.879468 | orchestrator | + availability_zone = "nova"
2026-02-14 02:17:37.879471 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.879478 | orchestrator | + image_id = (known after apply)
2026-02-14 02:17:37.879481 | orchestrator | + metadata = (known after apply)
2026-02-14 02:17:37.879484 | orchestrator | + name = "testbed-volume-0-node-base"
2026-02-14 02:17:37.879487 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.879491 | orchestrator | + size = 80
2026-02-14 02:17:37.879494 | orchestrator | + volume_retype_policy = "never"
2026-02-14 02:17:37.879497 | orchestrator | + volume_type = "ssd"
2026-02-14 02:17:37.879500 | orchestrator | }
2026-02-14 02:17:37.879504 | orchestrator |
2026-02-14 02:17:37.879507 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-02-14 02:17:37.879511 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-14 02:17:37.879514 | orchestrator | + attachment = (known after apply)
2026-02-14 02:17:37.879517 | orchestrator | + availability_zone = "nova"
2026-02-14 02:17:37.879520 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.879523 | orchestrator | + image_id = (known after apply)
2026-02-14 02:17:37.879526 | orchestrator | + metadata = (known after apply)
2026-02-14 02:17:37.879529 | orchestrator | + name = "testbed-volume-1-node-base"
2026-02-14 02:17:37.879533 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.879536 | orchestrator | + size = 80
2026-02-14 02:17:37.879539 | orchestrator | + volume_retype_policy = "never"
2026-02-14 02:17:37.879542 | orchestrator | + volume_type = "ssd"
2026-02-14 02:17:37.879545 | orchestrator | }
2026-02-14 02:17:37.879549 | orchestrator |
2026-02-14 02:17:37.879553 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-02-14 02:17:37.879556 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-14 02:17:37.879559 | orchestrator | + attachment = (known after apply)
2026-02-14 02:17:37.879563 | orchestrator | + availability_zone = "nova"
2026-02-14 02:17:37.879566 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.879569 | orchestrator | + image_id = (known after apply)
2026-02-14 02:17:37.879572 | orchestrator | + metadata = (known after apply)
2026-02-14 02:17:37.879575 | orchestrator | + name = "testbed-volume-2-node-base"
2026-02-14 02:17:37.879578 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.879582 | orchestrator | + size = 80
2026-02-14 02:17:37.879585 | orchestrator | + volume_retype_policy = "never"
2026-02-14 02:17:37.879588 | orchestrator | + volume_type = "ssd"
2026-02-14 02:17:37.879592 | orchestrator | }
2026-02-14 02:17:37.879596 | orchestrator |
2026-02-14 02:17:37.879599 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-02-14 02:17:37.879602 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-14 02:17:37.879605 | orchestrator | + attachment = (known after apply)
2026-02-14 02:17:37.879609 | orchestrator | + availability_zone = "nova"
2026-02-14 02:17:37.879612 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.879615 | orchestrator | + image_id = (known after apply)
2026-02-14 02:17:37.879619 | orchestrator | + metadata = (known after apply)
2026-02-14 02:17:37.879624 | orchestrator | + name = "testbed-volume-3-node-base"
2026-02-14 02:17:37.879627 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.879630 | orchestrator | + size = 80
2026-02-14 02:17:37.879634 | orchestrator | + volume_retype_policy = "never"
2026-02-14 02:17:37.879637 | orchestrator | + volume_type = "ssd"
2026-02-14 02:17:37.879640 | orchestrator | }
2026-02-14 02:17:37.879643 | orchestrator |
2026-02-14 02:17:37.879646 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-02-14 02:17:37.879649 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-14 02:17:37.879653 | orchestrator | + attachment = (known after apply)
2026-02-14 02:17:37.879656 | orchestrator | + availability_zone = "nova"
2026-02-14 02:17:37.879659 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.879666 | orchestrator | + image_id = (known after apply)
2026-02-14 02:17:37.879670 | orchestrator | + metadata = (known after apply)
2026-02-14 02:17:37.879673 | orchestrator | + name = "testbed-volume-4-node-base"
2026-02-14 02:17:37.879676 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.879679 | orchestrator | + size = 80
2026-02-14 02:17:37.879682 | orchestrator | + volume_retype_policy = "never"
2026-02-14 02:17:37.879686 | orchestrator | + volume_type = "ssd"
2026-02-14 02:17:37.879689 | orchestrator | }
2026-02-14 02:17:37.879693 | orchestrator |
2026-02-14 02:17:37.879696 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-02-14 02:17:37.879699 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-14 02:17:37.879703 | orchestrator | + attachment = (known after apply)
2026-02-14 02:17:37.879706 | orchestrator | + availability_zone = "nova"
2026-02-14 02:17:37.879709 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.879712 | orchestrator | + image_id = (known after apply)
2026-02-14 02:17:37.879715 | orchestrator | + metadata = (known after apply)
2026-02-14 02:17:37.879718 | orchestrator | + name = "testbed-volume-5-node-base"
2026-02-14 02:17:37.879722 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.879725 | orchestrator | + size = 80
2026-02-14 02:17:37.879728 | orchestrator | + volume_retype_policy = "never"
2026-02-14 02:17:37.879731 | orchestrator | + volume_type = "ssd"
2026-02-14 02:17:37.879735 | orchestrator | }
2026-02-14 02:17:37.879738 | orchestrator |
2026-02-14 02:17:37.879741 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-02-14 02:17:37.879744 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-14 02:17:37.879748 | orchestrator | + attachment = (known after apply)
2026-02-14 02:17:37.879751 | orchestrator | + availability_zone = "nova"
2026-02-14 02:17:37.879754 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.879757 | orchestrator | + metadata = (known after apply)
2026-02-14 02:17:37.879760 | orchestrator | + name = "testbed-volume-0-node-3"
2026-02-14 02:17:37.879764 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.879767 | orchestrator | + size = 20
2026-02-14 02:17:37.879770 | orchestrator | + volume_retype_policy = "never"
2026-02-14 02:17:37.879773 | orchestrator | + volume_type = "ssd"
2026-02-14 02:17:37.879776 | orchestrator | }
2026-02-14 02:17:37.879781 | orchestrator |
2026-02-14 02:17:37.879784 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-02-14 02:17:37.879787 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-14 02:17:37.879791 | orchestrator | + attachment = (known after apply)
2026-02-14 02:17:37.879794 | orchestrator | + availability_zone = "nova"
2026-02-14 02:17:37.879797 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.879800 | orchestrator | + metadata = (known after apply)
2026-02-14 02:17:37.879803 | orchestrator | + name = "testbed-volume-1-node-4"
2026-02-14 02:17:37.879806 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.879810 | orchestrator | + size = 20
2026-02-14 02:17:37.879813 | orchestrator | + volume_retype_policy = "never"
2026-02-14 02:17:37.879816 | orchestrator | + volume_type = "ssd"
2026-02-14 02:17:37.879819 | orchestrator | }
2026-02-14 02:17:37.879822 | orchestrator |
2026-02-14 02:17:37.879825 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-02-14 02:17:37.879829 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-14 02:17:37.879832 | orchestrator | + attachment = (known after apply)
2026-02-14 02:17:37.879835 | orchestrator | + availability_zone = "nova"
2026-02-14 02:17:37.879852 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.879856 | orchestrator | + metadata = (known after apply)
2026-02-14 02:17:37.879861 | orchestrator | + name = "testbed-volume-2-node-5"
2026-02-14 02:17:37.879866 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.879876 | orchestrator | + size = 20
2026-02-14 02:17:37.879882 | orchestrator | + volume_retype_policy = "never"
2026-02-14 02:17:37.879889 | orchestrator | + volume_type = "ssd"
2026-02-14 02:17:37.879895 | orchestrator | }
2026-02-14 02:17:37.879901 | orchestrator |
2026-02-14 02:17:37.879906 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-02-14 02:17:37.879911 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-14 02:17:37.879916 | orchestrator | + attachment = (known after apply)
2026-02-14 02:17:37.879921 | orchestrator | + availability_zone = "nova"
2026-02-14 02:17:37.879927 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.879932 | orchestrator | + metadata = (known after apply)
2026-02-14 02:17:37.879937 | orchestrator | + name = "testbed-volume-3-node-3"
2026-02-14 02:17:37.879942 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.879947 | orchestrator | + size = 20
2026-02-14 02:17:37.879952 | orchestrator | + volume_retype_policy = "never"
2026-02-14 02:17:37.879957 | orchestrator | + volume_type = "ssd"
2026-02-14 02:17:37.879962 | orchestrator | }
2026-02-14 02:17:37.879970 | orchestrator |
2026-02-14 02:17:37.879975 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-02-14 02:17:37.879981 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-14 02:17:37.879986 | orchestrator | + attachment = (known after apply)
2026-02-14 02:17:37.879989 | orchestrator | + availability_zone = "nova"
2026-02-14 02:17:37.879992 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.879995 | orchestrator | + metadata = (known after apply)
2026-02-14 02:17:37.879999 | orchestrator | + name = "testbed-volume-4-node-4"
2026-02-14 02:17:37.880002 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.880007 | orchestrator | + size = 20
2026-02-14 02:17:37.880011 | orchestrator | + volume_retype_policy = "never"
2026-02-14 02:17:37.880014 | orchestrator | + volume_type = "ssd"
2026-02-14 02:17:37.880017 | orchestrator | }
2026-02-14 02:17:37.880020 | orchestrator |
2026-02-14 02:17:37.880023 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-02-14 02:17:37.880026 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-14 02:17:37.880029 | orchestrator | + attachment = (known after apply)
2026-02-14 02:17:37.880032 | orchestrator | + availability_zone = "nova"
2026-02-14 02:17:37.880036 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.880039 | orchestrator | + metadata = (known after apply)
2026-02-14 02:17:37.880042 | orchestrator | + name = "testbed-volume-5-node-5"
2026-02-14 02:17:37.880045 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.880048 | orchestrator | + size = 20
2026-02-14 02:17:37.880051 | orchestrator | + volume_retype_policy = "never"
2026-02-14 02:17:37.880054 | orchestrator | + volume_type = "ssd"
2026-02-14 02:17:37.880057 | orchestrator | }
2026-02-14 02:17:37.880062 | orchestrator |
2026-02-14 02:17:37.880065 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-02-14 02:17:37.880068 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-14 02:17:37.880071 | orchestrator | + attachment = (known after apply)
2026-02-14 02:17:37.880074 | orchestrator | + availability_zone = "nova"
2026-02-14 02:17:37.880077 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.880080 | orchestrator | + metadata = (known after apply)
2026-02-14 02:17:37.880083 | orchestrator | + name = "testbed-volume-6-node-3"
2026-02-14 02:17:37.880086 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.880089 | orchestrator | + size = 20
2026-02-14 02:17:37.880093 | orchestrator | + volume_retype_policy = "never"
2026-02-14 02:17:37.880096 | orchestrator | + volume_type = "ssd"
2026-02-14 02:17:37.880099 | orchestrator | }
2026-02-14 02:17:37.880104 | orchestrator |
2026-02-14 02:17:37.880109 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-02-14 02:17:37.880116 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-14 02:17:37.880126 | orchestrator | + attachment = (known after apply)
2026-02-14 02:17:37.880131 | orchestrator | + availability_zone = "nova"
2026-02-14 02:17:37.880136 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.880141 | orchestrator | + metadata = (known after apply)
2026-02-14 02:17:37.880146 | orchestrator | + name = "testbed-volume-7-node-4"
2026-02-14 02:17:37.880152 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.880157 | orchestrator | + size = 20
2026-02-14 02:17:37.880163 | orchestrator | + volume_retype_policy = "never"
2026-02-14 02:17:37.880169 | orchestrator | + volume_type = "ssd"
2026-02-14 02:17:37.880174 | orchestrator | }
2026-02-14 02:17:37.880179 | orchestrator |
2026-02-14 02:17:37.880184 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-02-14 02:17:37.880190 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-02-14 02:17:37.880195 | orchestrator | + attachment = (known after apply) 2026-02-14 02:17:37.880200 | orchestrator | + availability_zone = "nova" 2026-02-14 02:17:37.880205 | orchestrator | + id = (known after apply) 2026-02-14 02:17:37.880211 | orchestrator | + metadata = (known after apply) 2026-02-14 02:17:37.880215 | orchestrator | + name = "testbed-volume-8-node-5" 2026-02-14 02:17:37.880218 | orchestrator | + region = (known after apply) 2026-02-14 02:17:37.880221 | orchestrator | + size = 20 2026-02-14 02:17:37.880225 | orchestrator | + volume_retype_policy = "never" 2026-02-14 02:17:37.880228 | orchestrator | + volume_type = "ssd" 2026-02-14 02:17:37.880231 | orchestrator | } 2026-02-14 02:17:37.880236 | orchestrator | 2026-02-14 02:17:37.880239 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-02-14 02:17:37.880242 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-02-14 02:17:37.880246 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-14 02:17:37.880249 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-14 02:17:37.880252 | orchestrator | + all_metadata = (known after apply) 2026-02-14 02:17:37.880255 | orchestrator | + all_tags = (known after apply) 2026-02-14 02:17:37.880258 | orchestrator | + availability_zone = "nova" 2026-02-14 02:17:37.880262 | orchestrator | + config_drive = true 2026-02-14 02:17:37.880265 | orchestrator | + created = (known after apply) 2026-02-14 02:17:37.880268 | orchestrator | + flavor_id = (known after apply) 2026-02-14 02:17:37.880271 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-02-14 02:17:37.880274 | orchestrator | + force_delete = false 2026-02-14 02:17:37.880277 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-14 02:17:37.880280 | 
orchestrator | + id = (known after apply) 2026-02-14 02:17:37.880283 | orchestrator | + image_id = (known after apply) 2026-02-14 02:17:37.880286 | orchestrator | + image_name = (known after apply) 2026-02-14 02:17:37.880289 | orchestrator | + key_pair = "testbed" 2026-02-14 02:17:37.880293 | orchestrator | + name = "testbed-manager" 2026-02-14 02:17:37.880296 | orchestrator | + power_state = "active" 2026-02-14 02:17:37.880300 | orchestrator | + region = (known after apply) 2026-02-14 02:17:37.880303 | orchestrator | + security_groups = (known after apply) 2026-02-14 02:17:37.880306 | orchestrator | + stop_before_destroy = false 2026-02-14 02:17:37.880309 | orchestrator | + updated = (known after apply) 2026-02-14 02:17:37.880312 | orchestrator | + user_data = (sensitive value) 2026-02-14 02:17:37.880315 | orchestrator | 2026-02-14 02:17:37.880319 | orchestrator | + block_device { 2026-02-14 02:17:37.880322 | orchestrator | + boot_index = 0 2026-02-14 02:17:37.880325 | orchestrator | + delete_on_termination = false 2026-02-14 02:17:37.880330 | orchestrator | + destination_type = "volume" 2026-02-14 02:17:37.880335 | orchestrator | + multiattach = false 2026-02-14 02:17:37.880340 | orchestrator | + source_type = "volume" 2026-02-14 02:17:37.880348 | orchestrator | + uuid = (known after apply) 2026-02-14 02:17:37.880360 | orchestrator | } 2026-02-14 02:17:37.880421 | orchestrator | 2026-02-14 02:17:37.880428 | orchestrator | + network { 2026-02-14 02:17:37.880433 | orchestrator | + access_network = false 2026-02-14 02:17:37.880439 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-14 02:17:37.880445 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-14 02:17:37.880448 | orchestrator | + mac = (known after apply) 2026-02-14 02:17:37.880451 | orchestrator | + name = (known after apply) 2026-02-14 02:17:37.880455 | orchestrator | + port = (known after apply) 2026-02-14 02:17:37.880458 | orchestrator | + uuid = (known after apply) 2026-02-14 
02:17:37.880461 | orchestrator | } 2026-02-14 02:17:37.880464 | orchestrator | } 2026-02-14 02:17:37.880517 | orchestrator | 2026-02-14 02:17:37.880524 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-02-14 02:17:37.880529 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-14 02:17:37.880537 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-14 02:17:37.880542 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-14 02:17:37.880548 | orchestrator | + all_metadata = (known after apply) 2026-02-14 02:17:37.880554 | orchestrator | + all_tags = (known after apply) 2026-02-14 02:17:37.880559 | orchestrator | + availability_zone = "nova" 2026-02-14 02:17:37.880564 | orchestrator | + config_drive = true 2026-02-14 02:17:37.880570 | orchestrator | + created = (known after apply) 2026-02-14 02:17:37.880576 | orchestrator | + flavor_id = (known after apply) 2026-02-14 02:17:37.880581 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-14 02:17:37.880587 | orchestrator | + force_delete = false 2026-02-14 02:17:37.880592 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-14 02:17:37.880598 | orchestrator | + id = (known after apply) 2026-02-14 02:17:37.880604 | orchestrator | + image_id = (known after apply) 2026-02-14 02:17:37.880609 | orchestrator | + image_name = (known after apply) 2026-02-14 02:17:37.880615 | orchestrator | + key_pair = "testbed" 2026-02-14 02:17:37.880620 | orchestrator | + name = "testbed-node-0" 2026-02-14 02:17:37.880625 | orchestrator | + power_state = "active" 2026-02-14 02:17:37.880628 | orchestrator | + region = (known after apply) 2026-02-14 02:17:37.880632 | orchestrator | + security_groups = (known after apply) 2026-02-14 02:17:37.880635 | orchestrator | + stop_before_destroy = false 2026-02-14 02:17:37.880638 | orchestrator | + updated = (known after apply) 2026-02-14 02:17:37.880641 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-14 02:17:37.880644 | orchestrator | 2026-02-14 02:17:37.880647 | orchestrator | + block_device { 2026-02-14 02:17:37.880651 | orchestrator | + boot_index = 0 2026-02-14 02:17:37.880654 | orchestrator | + delete_on_termination = false 2026-02-14 02:17:37.880657 | orchestrator | + destination_type = "volume" 2026-02-14 02:17:37.880660 | orchestrator | + multiattach = false 2026-02-14 02:17:37.880663 | orchestrator | + source_type = "volume" 2026-02-14 02:17:37.880666 | orchestrator | + uuid = (known after apply) 2026-02-14 02:17:37.880669 | orchestrator | } 2026-02-14 02:17:37.880672 | orchestrator | 2026-02-14 02:17:37.880675 | orchestrator | + network { 2026-02-14 02:17:37.880678 | orchestrator | + access_network = false 2026-02-14 02:17:37.880681 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-14 02:17:37.880685 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-14 02:17:37.880688 | orchestrator | + mac = (known after apply) 2026-02-14 02:17:37.880691 | orchestrator | + name = (known after apply) 2026-02-14 02:17:37.880694 | orchestrator | + port = (known after apply) 2026-02-14 02:17:37.880697 | orchestrator | + uuid = (known after apply) 2026-02-14 02:17:37.880700 | orchestrator | } 2026-02-14 02:17:37.880704 | orchestrator | } 2026-02-14 02:17:37.880709 | orchestrator | 2026-02-14 02:17:37.880712 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-02-14 02:17:37.880715 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-14 02:17:37.880719 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-14 02:17:37.880726 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-14 02:17:37.880729 | orchestrator | + all_metadata = (known after apply) 2026-02-14 02:17:37.880733 | orchestrator | + all_tags = (known after apply) 2026-02-14 02:17:37.880736 | orchestrator | + availability_zone = "nova" 2026-02-14 02:17:37.880739 
| orchestrator | + config_drive = true 2026-02-14 02:17:37.880742 | orchestrator | + created = (known after apply) 2026-02-14 02:17:37.880745 | orchestrator | + flavor_id = (known after apply) 2026-02-14 02:17:37.880748 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-14 02:17:37.880752 | orchestrator | + force_delete = false 2026-02-14 02:17:37.880755 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-14 02:17:37.880758 | orchestrator | + id = (known after apply) 2026-02-14 02:17:37.880761 | orchestrator | + image_id = (known after apply) 2026-02-14 02:17:37.880764 | orchestrator | + image_name = (known after apply) 2026-02-14 02:17:37.880768 | orchestrator | + key_pair = "testbed" 2026-02-14 02:17:37.880771 | orchestrator | + name = "testbed-node-1" 2026-02-14 02:17:37.880774 | orchestrator | + power_state = "active" 2026-02-14 02:17:37.880777 | orchestrator | + region = (known after apply) 2026-02-14 02:17:37.880781 | orchestrator | + security_groups = (known after apply) 2026-02-14 02:17:37.880786 | orchestrator | + stop_before_destroy = false 2026-02-14 02:17:37.880791 | orchestrator | + updated = (known after apply) 2026-02-14 02:17:37.880797 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-14 02:17:37.880803 | orchestrator | 2026-02-14 02:17:37.880808 | orchestrator | + block_device { 2026-02-14 02:17:37.880814 | orchestrator | + boot_index = 0 2026-02-14 02:17:37.880819 | orchestrator | + delete_on_termination = false 2026-02-14 02:17:37.880824 | orchestrator | + destination_type = "volume" 2026-02-14 02:17:37.880830 | orchestrator | + multiattach = false 2026-02-14 02:17:37.880835 | orchestrator | + source_type = "volume" 2026-02-14 02:17:37.880852 | orchestrator | + uuid = (known after apply) 2026-02-14 02:17:37.880858 | orchestrator | } 2026-02-14 02:17:37.880863 | orchestrator | 2026-02-14 02:17:37.880869 | orchestrator | + network { 2026-02-14 02:17:37.880874 | orchestrator | + access_network = 
false 2026-02-14 02:17:37.880877 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-14 02:17:37.880880 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-14 02:17:37.880883 | orchestrator | + mac = (known after apply) 2026-02-14 02:17:37.880886 | orchestrator | + name = (known after apply) 2026-02-14 02:17:37.880889 | orchestrator | + port = (known after apply) 2026-02-14 02:17:37.880893 | orchestrator | + uuid = (known after apply) 2026-02-14 02:17:37.880896 | orchestrator | } 2026-02-14 02:17:37.880901 | orchestrator | } 2026-02-14 02:17:37.880967 | orchestrator | 2026-02-14 02:17:37.880976 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-02-14 02:17:37.880979 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-14 02:17:37.880982 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-14 02:17:37.880985 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-14 02:17:37.880989 | orchestrator | + all_metadata = (known after apply) 2026-02-14 02:17:37.880992 | orchestrator | + all_tags = (known after apply) 2026-02-14 02:17:37.880999 | orchestrator | + availability_zone = "nova" 2026-02-14 02:17:37.881002 | orchestrator | + config_drive = true 2026-02-14 02:17:37.881008 | orchestrator | + created = (known after apply) 2026-02-14 02:17:37.881014 | orchestrator | + flavor_id = (known after apply) 2026-02-14 02:17:37.881022 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-14 02:17:37.881027 | orchestrator | + force_delete = false 2026-02-14 02:17:37.881032 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-14 02:17:37.881038 | orchestrator | + id = (known after apply) 2026-02-14 02:17:37.881043 | orchestrator | + image_id = (known after apply) 2026-02-14 02:17:37.881053 | orchestrator | + image_name = (known after apply) 2026-02-14 02:17:37.881058 | orchestrator | + key_pair = "testbed" 2026-02-14 02:17:37.881063 | orchestrator | + name = 
"testbed-node-2" 2026-02-14 02:17:37.881068 | orchestrator | + power_state = "active" 2026-02-14 02:17:37.881074 | orchestrator | + region = (known after apply) 2026-02-14 02:17:37.881079 | orchestrator | + security_groups = (known after apply) 2026-02-14 02:17:37.881085 | orchestrator | + stop_before_destroy = false 2026-02-14 02:17:37.881090 | orchestrator | + updated = (known after apply) 2026-02-14 02:17:37.881095 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-14 02:17:37.881100 | orchestrator | 2026-02-14 02:17:37.881106 | orchestrator | + block_device { 2026-02-14 02:17:37.881111 | orchestrator | + boot_index = 0 2026-02-14 02:17:37.881117 | orchestrator | + delete_on_termination = false 2026-02-14 02:17:37.881122 | orchestrator | + destination_type = "volume" 2026-02-14 02:17:37.881128 | orchestrator | + multiattach = false 2026-02-14 02:17:37.881133 | orchestrator | + source_type = "volume" 2026-02-14 02:17:37.881138 | orchestrator | + uuid = (known after apply) 2026-02-14 02:17:37.881144 | orchestrator | } 2026-02-14 02:17:37.881149 | orchestrator | 2026-02-14 02:17:37.881154 | orchestrator | + network { 2026-02-14 02:17:37.881160 | orchestrator | + access_network = false 2026-02-14 02:17:37.881165 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-14 02:17:37.881171 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-14 02:17:37.881175 | orchestrator | + mac = (known after apply) 2026-02-14 02:17:37.881178 | orchestrator | + name = (known after apply) 2026-02-14 02:17:37.881181 | orchestrator | + port = (known after apply) 2026-02-14 02:17:37.881184 | orchestrator | + uuid = (known after apply) 2026-02-14 02:17:37.881187 | orchestrator | } 2026-02-14 02:17:37.881190 | orchestrator | } 2026-02-14 02:17:37.881197 | orchestrator | 2026-02-14 02:17:37.881200 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-02-14 02:17:37.881203 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-02-14 02:17:37.881206 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-14 02:17:37.881209 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-14 02:17:37.881213 | orchestrator | + all_metadata = (known after apply) 2026-02-14 02:17:37.881216 | orchestrator | + all_tags = (known after apply) 2026-02-14 02:17:37.881219 | orchestrator | + availability_zone = "nova" 2026-02-14 02:17:37.881222 | orchestrator | + config_drive = true 2026-02-14 02:17:37.881225 | orchestrator | + created = (known after apply) 2026-02-14 02:17:37.881228 | orchestrator | + flavor_id = (known after apply) 2026-02-14 02:17:37.881231 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-14 02:17:37.881235 | orchestrator | + force_delete = false 2026-02-14 02:17:37.881239 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-14 02:17:37.881244 | orchestrator | + id = (known after apply) 2026-02-14 02:17:37.881248 | orchestrator | + image_id = (known after apply) 2026-02-14 02:17:37.881253 | orchestrator | + image_name = (known after apply) 2026-02-14 02:17:37.881258 | orchestrator | + key_pair = "testbed" 2026-02-14 02:17:37.881263 | orchestrator | + name = "testbed-node-3" 2026-02-14 02:17:37.881268 | orchestrator | + power_state = "active" 2026-02-14 02:17:37.881273 | orchestrator | + region = (known after apply) 2026-02-14 02:17:37.881277 | orchestrator | + security_groups = (known after apply) 2026-02-14 02:17:37.881282 | orchestrator | + stop_before_destroy = false 2026-02-14 02:17:37.881288 | orchestrator | + updated = (known after apply) 2026-02-14 02:17:37.881293 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-14 02:17:37.881298 | orchestrator | 2026-02-14 02:17:37.881303 | orchestrator | + block_device { 2026-02-14 02:17:37.881317 | orchestrator | + boot_index = 0 2026-02-14 02:17:37.881323 | orchestrator | + delete_on_termination = false 2026-02-14 
02:17:37.881328 | orchestrator | + destination_type = "volume" 2026-02-14 02:17:37.881338 | orchestrator | + multiattach = false 2026-02-14 02:17:37.881343 | orchestrator | + source_type = "volume" 2026-02-14 02:17:37.881349 | orchestrator | + uuid = (known after apply) 2026-02-14 02:17:37.881353 | orchestrator | } 2026-02-14 02:17:37.881356 | orchestrator | 2026-02-14 02:17:37.881359 | orchestrator | + network { 2026-02-14 02:17:37.881362 | orchestrator | + access_network = false 2026-02-14 02:17:37.881365 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-14 02:17:37.881368 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-14 02:17:37.881371 | orchestrator | + mac = (known after apply) 2026-02-14 02:17:37.881374 | orchestrator | + name = (known after apply) 2026-02-14 02:17:37.881377 | orchestrator | + port = (known after apply) 2026-02-14 02:17:37.881380 | orchestrator | + uuid = (known after apply) 2026-02-14 02:17:37.881383 | orchestrator | } 2026-02-14 02:17:37.881386 | orchestrator | } 2026-02-14 02:17:37.881392 | orchestrator | 2026-02-14 02:17:37.881395 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-02-14 02:17:37.881398 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-14 02:17:37.881402 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-14 02:17:37.881405 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-14 02:17:37.881408 | orchestrator | + all_metadata = (known after apply) 2026-02-14 02:17:37.881411 | orchestrator | + all_tags = (known after apply) 2026-02-14 02:17:37.881414 | orchestrator | + availability_zone = "nova" 2026-02-14 02:17:37.881418 | orchestrator | + config_drive = true 2026-02-14 02:17:37.881423 | orchestrator | + created = (known after apply) 2026-02-14 02:17:37.881428 | orchestrator | + flavor_id = (known after apply) 2026-02-14 02:17:37.881434 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-14 02:17:37.881439 | 
orchestrator | + force_delete = false 2026-02-14 02:17:37.881444 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-14 02:17:37.881450 | orchestrator | + id = (known after apply) 2026-02-14 02:17:37.881455 | orchestrator | + image_id = (known after apply) 2026-02-14 02:17:37.881461 | orchestrator | + image_name = (known after apply) 2026-02-14 02:17:37.881466 | orchestrator | + key_pair = "testbed" 2026-02-14 02:17:37.881470 | orchestrator | + name = "testbed-node-4" 2026-02-14 02:17:37.881474 | orchestrator | + power_state = "active" 2026-02-14 02:17:37.881477 | orchestrator | + region = (known after apply) 2026-02-14 02:17:37.881480 | orchestrator | + security_groups = (known after apply) 2026-02-14 02:17:37.881483 | orchestrator | + stop_before_destroy = false 2026-02-14 02:17:37.881486 | orchestrator | + updated = (known after apply) 2026-02-14 02:17:37.881489 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-14 02:17:37.881492 | orchestrator | 2026-02-14 02:17:37.881495 | orchestrator | + block_device { 2026-02-14 02:17:37.881498 | orchestrator | + boot_index = 0 2026-02-14 02:17:37.881502 | orchestrator | + delete_on_termination = false 2026-02-14 02:17:37.881507 | orchestrator | + destination_type = "volume" 2026-02-14 02:17:37.881512 | orchestrator | + multiattach = false 2026-02-14 02:17:37.881517 | orchestrator | + source_type = "volume" 2026-02-14 02:17:37.881522 | orchestrator | + uuid = (known after apply) 2026-02-14 02:17:37.881526 | orchestrator | } 2026-02-14 02:17:37.881529 | orchestrator | 2026-02-14 02:17:37.881532 | orchestrator | + network { 2026-02-14 02:17:37.881536 | orchestrator | + access_network = false 2026-02-14 02:17:37.881539 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-14 02:17:37.881542 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-14 02:17:37.881545 | orchestrator | + mac = (known after apply) 2026-02-14 02:17:37.881548 | orchestrator | + name = (known 
after apply) 2026-02-14 02:17:37.881551 | orchestrator | + port = (known after apply) 2026-02-14 02:17:37.881554 | orchestrator | + uuid = (known after apply) 2026-02-14 02:17:37.881557 | orchestrator | } 2026-02-14 02:17:37.881560 | orchestrator | } 2026-02-14 02:17:37.881647 | orchestrator | 2026-02-14 02:17:37.881653 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-02-14 02:17:37.881656 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-14 02:17:37.881659 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-14 02:17:37.881662 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-14 02:17:37.881665 | orchestrator | + all_metadata = (known after apply) 2026-02-14 02:17:37.881668 | orchestrator | + all_tags = (known after apply) 2026-02-14 02:17:37.881672 | orchestrator | + availability_zone = "nova" 2026-02-14 02:17:37.881675 | orchestrator | + config_drive = true 2026-02-14 02:17:37.881678 | orchestrator | + created = (known after apply) 2026-02-14 02:17:37.881681 | orchestrator | + flavor_id = (known after apply) 2026-02-14 02:17:37.881684 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-14 02:17:37.881687 | orchestrator | + force_delete = false 2026-02-14 02:17:37.881694 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-14 02:17:37.881698 | orchestrator | + id = (known after apply) 2026-02-14 02:17:37.881701 | orchestrator | + image_id = (known after apply) 2026-02-14 02:17:37.881704 | orchestrator | + image_name = (known after apply) 2026-02-14 02:17:37.881707 | orchestrator | + key_pair = "testbed" 2026-02-14 02:17:37.881710 | orchestrator | + name = "testbed-node-5" 2026-02-14 02:17:37.881713 | orchestrator | + power_state = "active" 2026-02-14 02:17:37.881716 | orchestrator | + region = (known after apply) 2026-02-14 02:17:37.881719 | orchestrator | + security_groups = (known after apply) 2026-02-14 02:17:37.881722 | orchestrator | + 
stop_before_destroy = false 2026-02-14 02:17:37.881726 | orchestrator | + updated = (known after apply) 2026-02-14 02:17:37.881729 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-14 02:17:37.881732 | orchestrator | 2026-02-14 02:17:37.881735 | orchestrator | + block_device { 2026-02-14 02:17:37.881738 | orchestrator | + boot_index = 0 2026-02-14 02:17:37.881741 | orchestrator | + delete_on_termination = false 2026-02-14 02:17:37.881744 | orchestrator | + destination_type = "volume" 2026-02-14 02:17:37.881747 | orchestrator | + multiattach = false 2026-02-14 02:17:37.881750 | orchestrator | + source_type = "volume" 2026-02-14 02:17:37.881753 | orchestrator | + uuid = (known after apply) 2026-02-14 02:17:37.881756 | orchestrator | } 2026-02-14 02:17:37.881759 | orchestrator | 2026-02-14 02:17:37.881763 | orchestrator | + network { 2026-02-14 02:17:37.881766 | orchestrator | + access_network = false 2026-02-14 02:17:37.881769 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-14 02:17:37.881772 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-14 02:17:37.881775 | orchestrator | + mac = (known after apply) 2026-02-14 02:17:37.881778 | orchestrator | + name = (known after apply) 2026-02-14 02:17:37.881781 | orchestrator | + port = (known after apply) 2026-02-14 02:17:37.881784 | orchestrator | + uuid = (known after apply) 2026-02-14 02:17:37.881787 | orchestrator | } 2026-02-14 02:17:37.881791 | orchestrator | } 2026-02-14 02:17:37.881795 | orchestrator | 2026-02-14 02:17:37.881798 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-02-14 02:17:37.881801 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-02-14 02:17:37.881805 | orchestrator | + fingerprint = (known after apply) 2026-02-14 02:17:37.881808 | orchestrator | + id = (known after apply) 2026-02-14 02:17:37.881811 | orchestrator | + name = "testbed" 2026-02-14 02:17:37.881814 | orchestrator | + private_key = 
(sensitive value) 2026-02-14 02:17:37.881817 | orchestrator | + public_key = (known after apply) 2026-02-14 02:17:37.881820 | orchestrator | + region = (known after apply) 2026-02-14 02:17:37.881823 | orchestrator | + user_id = (known after apply) 2026-02-14 02:17:37.881826 | orchestrator | } 2026-02-14 02:17:37.881830 | orchestrator | 2026-02-14 02:17:37.881833 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-02-14 02:17:37.881836 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-14 02:17:37.881869 | orchestrator | + device = (known after apply) 2026-02-14 02:17:37.881873 | orchestrator | + id = (known after apply) 2026-02-14 02:17:37.881876 | orchestrator | + instance_id = (known after apply) 2026-02-14 02:17:37.881879 | orchestrator | + region = (known after apply) 2026-02-14 02:17:37.881882 | orchestrator | + volume_id = (known after apply) 2026-02-14 02:17:37.881885 | orchestrator | } 2026-02-14 02:17:37.881889 | orchestrator | 2026-02-14 02:17:37.881892 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-02-14 02:17:37.881895 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-14 02:17:37.881898 | orchestrator | + device = (known after apply) 2026-02-14 02:17:37.881901 | orchestrator | + id = (known after apply) 2026-02-14 02:17:37.881904 | orchestrator | + instance_id = (known after apply) 2026-02-14 02:17:37.881907 | orchestrator | + region = (known after apply) 2026-02-14 02:17:37.881910 | orchestrator | + volume_id = (known after apply) 2026-02-14 02:17:37.881914 | orchestrator | } 2026-02-14 02:17:37.881917 | orchestrator | 2026-02-14 02:17:37.881920 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-02-14 02:17:37.881923 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
2026-02-14 02:17:37.881926 | orchestrator | + device = (known after apply)
2026-02-14 02:17:37.881929 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.881932 | orchestrator | + instance_id = (known after apply)
2026-02-14 02:17:37.881935 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.881939 | orchestrator | + volume_id = (known after apply)
2026-02-14 02:17:37.881942 | orchestrator | }
2026-02-14 02:17:37.881945 | orchestrator |
2026-02-14 02:17:37.881948 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
2026-02-14 02:17:37.881951 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-02-14 02:17:37.881954 | orchestrator | + device = (known after apply)
2026-02-14 02:17:37.881958 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.881961 | orchestrator | + instance_id = (known after apply)
2026-02-14 02:17:37.881964 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.881967 | orchestrator | + volume_id = (known after apply)
2026-02-14 02:17:37.881970 | orchestrator | }
2026-02-14 02:17:37.881973 | orchestrator |
2026-02-14 02:17:37.881976 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
2026-02-14 02:17:37.881980 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-02-14 02:17:37.881983 | orchestrator | + device = (known after apply)
2026-02-14 02:17:37.881986 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.881989 | orchestrator | + instance_id = (known after apply)
2026-02-14 02:17:37.881995 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.881998 | orchestrator | + volume_id = (known after apply)
2026-02-14 02:17:37.882001 | orchestrator | }
2026-02-14 02:17:37.882006 | orchestrator |
2026-02-14 02:17:37.882009 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
2026-02-14 02:17:37.882027 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-02-14 02:17:37.882032 | orchestrator | + device = (known after apply)
2026-02-14 02:17:37.882035 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.882038 | orchestrator | + instance_id = (known after apply)
2026-02-14 02:17:37.882041 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.882044 | orchestrator | + volume_id = (known after apply)
2026-02-14 02:17:37.882048 | orchestrator | }
2026-02-14 02:17:37.882051 | orchestrator |
2026-02-14 02:17:37.882054 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
2026-02-14 02:17:37.882057 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-02-14 02:17:37.882060 | orchestrator | + device = (known after apply)
2026-02-14 02:17:37.882063 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.882066 | orchestrator | + instance_id = (known after apply)
2026-02-14 02:17:37.882070 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.882075 | orchestrator | + volume_id = (known after apply)
2026-02-14 02:17:37.882079 | orchestrator | }
2026-02-14 02:17:37.882082 | orchestrator |
2026-02-14 02:17:37.882085 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
2026-02-14 02:17:37.882088 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-02-14 02:17:37.882091 | orchestrator | + device = (known after apply)
2026-02-14 02:17:37.882094 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.882097 | orchestrator | + instance_id = (known after apply)
2026-02-14 02:17:37.882101 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.882104 | orchestrator | + volume_id = (known after apply)
2026-02-14 02:17:37.882107 | orchestrator | }
2026-02-14 02:17:37.882110 | orchestrator |
2026-02-14 02:17:37.882113 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
2026-02-14 02:17:37.882116 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-02-14 02:17:37.882120 | orchestrator | + device = (known after apply)
2026-02-14 02:17:37.882123 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.882126 | orchestrator | + instance_id = (known after apply)
2026-02-14 02:17:37.882129 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.882134 | orchestrator | + volume_id = (known after apply)
2026-02-14 02:17:37.882139 | orchestrator | }
2026-02-14 02:17:37.882145 | orchestrator |
2026-02-14 02:17:37.882151 | orchestrator | # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
2026-02-14 02:17:37.882155 | orchestrator | + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
2026-02-14 02:17:37.882158 | orchestrator | + fixed_ip = (known after apply)
2026-02-14 02:17:37.882161 | orchestrator | + floating_ip = (known after apply)
2026-02-14 02:17:37.882164 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.882167 | orchestrator | + port_id = (known after apply)
2026-02-14 02:17:37.882170 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.882173 | orchestrator | }
2026-02-14 02:17:37.882178 | orchestrator |
2026-02-14 02:17:37.882182 | orchestrator | # openstack_networking_floatingip_v2.manager_floating_ip will be created
2026-02-14 02:17:37.882185 | orchestrator | + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
2026-02-14 02:17:37.882188 | orchestrator | + address = (known after apply)
2026-02-14 02:17:37.882191 | orchestrator | + all_tags = (known after apply)
2026-02-14 02:17:37.882194 | orchestrator | + dns_domain = (known after apply)
2026-02-14 02:17:37.882197 | orchestrator | + dns_name = (known after apply)
2026-02-14 02:17:37.882203 | orchestrator | + fixed_ip = (known after apply)
2026-02-14 02:17:37.882208 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.882213 | orchestrator | + pool = "public"
2026-02-14 02:17:37.882218 | orchestrator | + port_id = (known after apply)
2026-02-14 02:17:37.882223 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.882227 | orchestrator | + subnet_id = (known after apply)
2026-02-14 02:17:37.882232 | orchestrator | + tenant_id = (known after apply)
2026-02-14 02:17:37.882238 | orchestrator | }
2026-02-14 02:17:37.882243 | orchestrator |
2026-02-14 02:17:37.882248 | orchestrator | # openstack_networking_network_v2.net_management will be created
2026-02-14 02:17:37.882253 | orchestrator | + resource "openstack_networking_network_v2" "net_management" {
2026-02-14 02:17:37.882258 | orchestrator | + admin_state_up = (known after apply)
2026-02-14 02:17:37.882263 | orchestrator | + all_tags = (known after apply)
2026-02-14 02:17:37.882268 | orchestrator | + availability_zone_hints = [
2026-02-14 02:17:37.882273 | orchestrator | + "nova",
2026-02-14 02:17:37.882278 | orchestrator | ]
2026-02-14 02:17:37.882283 | orchestrator | + dns_domain = (known after apply)
2026-02-14 02:17:37.882288 | orchestrator | + external = (known after apply)
2026-02-14 02:17:37.882294 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.882299 | orchestrator | + mtu = (known after apply)
2026-02-14 02:17:37.882304 | orchestrator | + name = "net-testbed-management"
2026-02-14 02:17:37.882309 | orchestrator | + port_security_enabled = (known after apply)
2026-02-14 02:17:37.882319 | orchestrator | + qos_policy_id = (known after apply)
2026-02-14 02:17:37.882325 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.882330 | orchestrator | + shared = (known after apply)
2026-02-14 02:17:37.882336 | orchestrator | + tenant_id = (known after apply)
2026-02-14 02:17:37.882342 | orchestrator | + transparent_vlan = (known after apply)
2026-02-14 02:17:37.882347 | orchestrator |
2026-02-14 02:17:37.882353 | orchestrator | + segments (known after apply)
2026-02-14 02:17:37.882359 | orchestrator | }
2026-02-14 02:17:37.882365 | orchestrator |
2026-02-14 02:17:37.882368 | orchestrator | # openstack_networking_port_v2.manager_port_management will be created
2026-02-14 02:17:37.882372 | orchestrator | + resource "openstack_networking_port_v2" "manager_port_management" {
2026-02-14 02:17:37.882375 | orchestrator | + admin_state_up = (known after apply)
2026-02-14 02:17:37.882378 | orchestrator | + all_fixed_ips = (known after apply)
2026-02-14 02:17:37.882381 | orchestrator | + all_security_group_ids = (known after apply)
2026-02-14 02:17:37.882388 | orchestrator | + all_tags = (known after apply)
2026-02-14 02:17:37.882391 | orchestrator | + device_id = (known after apply)
2026-02-14 02:17:37.882394 | orchestrator | + device_owner = (known after apply)
2026-02-14 02:17:37.882397 | orchestrator | + dns_assignment = (known after apply)
2026-02-14 02:17:37.882401 | orchestrator | + dns_name = (known after apply)
2026-02-14 02:17:37.882404 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.882407 | orchestrator | + mac_address = (known after apply)
2026-02-14 02:17:37.882410 | orchestrator | + network_id = (known after apply)
2026-02-14 02:17:37.882413 | orchestrator | + port_security_enabled = (known after apply)
2026-02-14 02:17:37.882416 | orchestrator | + qos_policy_id = (known after apply)
2026-02-14 02:17:37.882419 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.882422 | orchestrator | + security_group_ids = (known after apply)
2026-02-14 02:17:37.882425 | orchestrator | + tenant_id = (known after apply)
2026-02-14 02:17:37.882428 | orchestrator |
2026-02-14 02:17:37.882431 | orchestrator | + allowed_address_pairs {
2026-02-14 02:17:37.882435 | orchestrator | + ip_address = "192.168.16.8/32"
2026-02-14 02:17:37.882438 | orchestrator | }
2026-02-14 02:17:37.882441 | orchestrator |
2026-02-14 02:17:37.882444 | orchestrator | + binding (known after apply)
2026-02-14 02:17:37.882447 | orchestrator |
2026-02-14 02:17:37.882450 | orchestrator | + fixed_ip {
2026-02-14 02:17:37.882453 | orchestrator | + ip_address = "192.168.16.5"
2026-02-14 02:17:37.882456 | orchestrator | + subnet_id = (known after apply)
2026-02-14 02:17:37.882460 | orchestrator | }
2026-02-14 02:17:37.882463 | orchestrator | }
2026-02-14 02:17:37.882466 | orchestrator |
2026-02-14 02:17:37.882469 | orchestrator | # openstack_networking_port_v2.node_port_management[0] will be created
2026-02-14 02:17:37.882472 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-02-14 02:17:37.882475 | orchestrator | + admin_state_up = (known after apply)
2026-02-14 02:17:37.882478 | orchestrator | + all_fixed_ips = (known after apply)
2026-02-14 02:17:37.882481 | orchestrator | + all_security_group_ids = (known after apply)
2026-02-14 02:17:37.882484 | orchestrator | + all_tags = (known after apply)
2026-02-14 02:17:37.882487 | orchestrator | + device_id = (known after apply)
2026-02-14 02:17:37.882491 | orchestrator | + device_owner = (known after apply)
2026-02-14 02:17:37.882494 | orchestrator | + dns_assignment = (known after apply)
2026-02-14 02:17:37.882497 | orchestrator | + dns_name = (known after apply)
2026-02-14 02:17:37.882500 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.882503 | orchestrator | + mac_address = (known after apply)
2026-02-14 02:17:37.882506 | orchestrator | + network_id = (known after apply)
2026-02-14 02:17:37.882509 | orchestrator | + port_security_enabled = (known after apply)
2026-02-14 02:17:37.882512 | orchestrator | + qos_policy_id = (known after apply)
2026-02-14 02:17:37.882515 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.882521 | orchestrator | + security_group_ids = (known after apply)
2026-02-14 02:17:37.882524 | orchestrator | + tenant_id = (known after apply)
2026-02-14 02:17:37.882527 | orchestrator |
2026-02-14 02:17:37.882531 | orchestrator | + allowed_address_pairs {
2026-02-14 02:17:37.882534 | orchestrator | + ip_address = "192.168.16.254/32"
2026-02-14 02:17:37.882537 | orchestrator | }
2026-02-14 02:17:37.882540 | orchestrator | + allowed_address_pairs {
2026-02-14 02:17:37.882543 | orchestrator | + ip_address = "192.168.16.8/32"
2026-02-14 02:17:37.882546 | orchestrator | }
2026-02-14 02:17:37.882549 | orchestrator | + allowed_address_pairs {
2026-02-14 02:17:37.882552 | orchestrator | + ip_address = "192.168.16.9/32"
2026-02-14 02:17:37.882555 | orchestrator | }
2026-02-14 02:17:37.882559 | orchestrator |
2026-02-14 02:17:37.882562 | orchestrator | + binding (known after apply)
2026-02-14 02:17:37.882565 | orchestrator |
2026-02-14 02:17:37.882568 | orchestrator | + fixed_ip {
2026-02-14 02:17:37.882571 | orchestrator | + ip_address = "192.168.16.10"
2026-02-14 02:17:37.882574 | orchestrator | + subnet_id = (known after apply)
2026-02-14 02:17:37.882577 | orchestrator | }
2026-02-14 02:17:37.882580 | orchestrator | }
2026-02-14 02:17:37.882584 | orchestrator |
2026-02-14 02:17:37.882588 | orchestrator | # openstack_networking_port_v2.node_port_management[1] will be created
2026-02-14 02:17:37.882591 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-02-14 02:17:37.882594 | orchestrator | + admin_state_up = (known after apply)
2026-02-14 02:17:37.882597 | orchestrator | + all_fixed_ips = (known after apply)
2026-02-14 02:17:37.882600 | orchestrator | + all_security_group_ids = (known after apply)
2026-02-14 02:17:37.882603 | orchestrator | + all_tags = (known after apply)
2026-02-14 02:17:37.882606 | orchestrator | + device_id = (known after apply)
2026-02-14 02:17:37.882609 | orchestrator | + device_owner = (known after apply)
2026-02-14 02:17:37.882612 | orchestrator | + dns_assignment = (known after apply)
2026-02-14 02:17:37.882616 | orchestrator | + dns_name = (known after apply)
2026-02-14 02:17:37.882619 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.882622 | orchestrator | + mac_address = (known after apply)
2026-02-14 02:17:37.882625 | orchestrator | + network_id = (known after apply)
2026-02-14 02:17:37.882628 | orchestrator | + port_security_enabled = (known after apply)
2026-02-14 02:17:37.882631 | orchestrator | + qos_policy_id = (known after apply)
2026-02-14 02:17:37.882634 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.882637 | orchestrator | + security_group_ids = (known after apply)
2026-02-14 02:17:37.882641 | orchestrator | + tenant_id = (known after apply)
2026-02-14 02:17:37.882644 | orchestrator |
2026-02-14 02:17:37.882647 | orchestrator | + allowed_address_pairs {
2026-02-14 02:17:37.882650 | orchestrator | + ip_address = "192.168.16.254/32"
2026-02-14 02:17:37.882653 | orchestrator | }
2026-02-14 02:17:37.882657 | orchestrator | + allowed_address_pairs {
2026-02-14 02:17:37.882660 | orchestrator | + ip_address = "192.168.16.8/32"
2026-02-14 02:17:37.882663 | orchestrator | }
2026-02-14 02:17:37.882666 | orchestrator | + allowed_address_pairs {
2026-02-14 02:17:37.882669 | orchestrator | + ip_address = "192.168.16.9/32"
2026-02-14 02:17:37.882672 | orchestrator | }
2026-02-14 02:17:37.882675 | orchestrator |
2026-02-14 02:17:37.882678 | orchestrator | + binding (known after apply)
2026-02-14 02:17:37.882681 | orchestrator |
2026-02-14 02:17:37.882684 | orchestrator | + fixed_ip {
2026-02-14 02:17:37.882687 | orchestrator | + ip_address = "192.168.16.11"
2026-02-14 02:17:37.882690 | orchestrator | + subnet_id = (known after apply)
2026-02-14 02:17:37.882693 | orchestrator | }
2026-02-14 02:17:37.882697 | orchestrator | }
2026-02-14 02:17:37.882700 | orchestrator |
2026-02-14 02:17:37.882703 | orchestrator | # openstack_networking_port_v2.node_port_management[2] will be created
2026-02-14 02:17:37.882706 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-02-14 02:17:37.882709 | orchestrator | + admin_state_up = (known after apply)
2026-02-14 02:17:37.882712 | orchestrator | + all_fixed_ips = (known after apply)
2026-02-14 02:17:37.882716 | orchestrator | + all_security_group_ids = (known after apply)
2026-02-14 02:17:37.882719 | orchestrator | + all_tags = (known after apply)
2026-02-14 02:17:37.882724 | orchestrator | + device_id = (known after apply)
2026-02-14 02:17:37.882727 | orchestrator | + device_owner = (known after apply)
2026-02-14 02:17:37.882730 | orchestrator | + dns_assignment = (known after apply)
2026-02-14 02:17:37.882733 | orchestrator | + dns_name = (known after apply)
2026-02-14 02:17:37.882739 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.882742 | orchestrator | + mac_address = (known after apply)
2026-02-14 02:17:37.882745 | orchestrator | + network_id = (known after apply)
2026-02-14 02:17:37.882748 | orchestrator | + port_security_enabled = (known after apply)
2026-02-14 02:17:37.882751 | orchestrator | + qos_policy_id = (known after apply)
2026-02-14 02:17:37.882754 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.882759 | orchestrator | + security_group_ids = (known after apply)
2026-02-14 02:17:37.882767 | orchestrator | + tenant_id = (known after apply)
2026-02-14 02:17:37.882774 | orchestrator |
2026-02-14 02:17:37.882778 | orchestrator | + allowed_address_pairs {
2026-02-14 02:17:37.882783 | orchestrator | + ip_address = "192.168.16.254/32"
2026-02-14 02:17:37.882788 | orchestrator | }
2026-02-14 02:17:37.882793 | orchestrator | + allowed_address_pairs {
2026-02-14 02:17:37.882799 | orchestrator | + ip_address = "192.168.16.8/32"
2026-02-14 02:17:37.882804 | orchestrator | }
2026-02-14 02:17:37.882809 | orchestrator | + allowed_address_pairs {
2026-02-14 02:17:37.882813 | orchestrator | + ip_address = "192.168.16.9/32"
2026-02-14 02:17:37.882817 | orchestrator | }
2026-02-14 02:17:37.882820 | orchestrator |
2026-02-14 02:17:37.882823 | orchestrator | + binding (known after apply)
2026-02-14 02:17:37.882826 | orchestrator |
2026-02-14 02:17:37.882829 | orchestrator | + fixed_ip {
2026-02-14 02:17:37.882832 | orchestrator | + ip_address = "192.168.16.12"
2026-02-14 02:17:37.882836 | orchestrator | + subnet_id = (known after apply)
2026-02-14 02:17:37.882868 | orchestrator | }
2026-02-14 02:17:37.882875 | orchestrator | }
2026-02-14 02:17:37.882882 | orchestrator |
2026-02-14 02:17:37.882887 | orchestrator | # openstack_networking_port_v2.node_port_management[3] will be created
2026-02-14 02:17:37.882892 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-02-14 02:17:37.882897 | orchestrator | + admin_state_up = (known after apply)
2026-02-14 02:17:37.882902 | orchestrator | + all_fixed_ips = (known after apply)
2026-02-14 02:17:37.882908 | orchestrator | + all_security_group_ids = (known after apply)
2026-02-14 02:17:37.882913 | orchestrator | + all_tags = (known after apply)
2026-02-14 02:17:37.882918 | orchestrator | + device_id = (known after apply)
2026-02-14 02:17:37.882923 | orchestrator | + device_owner = (known after apply)
2026-02-14 02:17:37.882928 | orchestrator | + dns_assignment = (known after apply)
2026-02-14 02:17:37.882931 | orchestrator | + dns_name = (known after apply)
2026-02-14 02:17:37.882935 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.882938 | orchestrator | + mac_address = (known after apply)
2026-02-14 02:17:37.882941 | orchestrator | + network_id = (known after apply)
2026-02-14 02:17:37.882944 | orchestrator | + port_security_enabled = (known after apply)
2026-02-14 02:17:37.882947 | orchestrator | + qos_policy_id = (known after apply)
2026-02-14 02:17:37.882950 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.882953 | orchestrator | + security_group_ids = (known after apply)
2026-02-14 02:17:37.882957 | orchestrator | + tenant_id = (known after apply)
2026-02-14 02:17:37.882960 | orchestrator |
2026-02-14 02:17:37.882963 | orchestrator | + allowed_address_pairs {
2026-02-14 02:17:37.882966 | orchestrator | + ip_address = "192.168.16.254/32"
2026-02-14 02:17:37.882969 | orchestrator | }
2026-02-14 02:17:37.882972 | orchestrator | + allowed_address_pairs {
2026-02-14 02:17:37.882975 | orchestrator | + ip_address = "192.168.16.8/32"
2026-02-14 02:17:37.882978 | orchestrator | }
2026-02-14 02:17:37.882982 | orchestrator | + allowed_address_pairs {
2026-02-14 02:17:37.882985 | orchestrator | + ip_address = "192.168.16.9/32"
2026-02-14 02:17:37.882988 | orchestrator | }
2026-02-14 02:17:37.882991 | orchestrator |
2026-02-14 02:17:37.882998 | orchestrator | + binding (known after apply)
2026-02-14 02:17:37.883001 | orchestrator |
2026-02-14 02:17:37.883004 | orchestrator | + fixed_ip {
2026-02-14 02:17:37.883007 | orchestrator | + ip_address = "192.168.16.13"
2026-02-14 02:17:37.883010 | orchestrator | + subnet_id = (known after apply)
2026-02-14 02:17:37.883013 | orchestrator | }
2026-02-14 02:17:37.883016 | orchestrator | }
2026-02-14 02:17:37.883021 | orchestrator |
2026-02-14 02:17:37.883024 | orchestrator | # openstack_networking_port_v2.node_port_management[4] will be created
2026-02-14 02:17:37.883027 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-02-14 02:17:37.883030 | orchestrator | + admin_state_up = (known after apply)
2026-02-14 02:17:37.883033 | orchestrator | + all_fixed_ips = (known after apply)
2026-02-14 02:17:37.883036 | orchestrator | + all_security_group_ids = (known after apply)
2026-02-14 02:17:37.883039 | orchestrator | + all_tags = (known after apply)
2026-02-14 02:17:37.883042 | orchestrator | + device_id = (known after apply)
2026-02-14 02:17:37.883046 | orchestrator | + device_owner = (known after apply)
2026-02-14 02:17:37.883049 | orchestrator | + dns_assignment = (known after apply)
2026-02-14 02:17:37.883052 | orchestrator | + dns_name = (known after apply)
2026-02-14 02:17:37.883055 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.883058 | orchestrator | + mac_address = (known after apply)
2026-02-14 02:17:37.883061 | orchestrator | + network_id = (known after apply)
2026-02-14 02:17:37.883064 | orchestrator | + port_security_enabled = (known after apply)
2026-02-14 02:17:37.883067 | orchestrator | + qos_policy_id = (known after apply)
2026-02-14 02:17:37.883070 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.883073 | orchestrator | + security_group_ids = (known after apply)
2026-02-14 02:17:37.883077 | orchestrator | + tenant_id = (known after apply)
2026-02-14 02:17:37.883081 | orchestrator |
2026-02-14 02:17:37.883084 | orchestrator | + allowed_address_pairs {
2026-02-14 02:17:37.883087 | orchestrator | + ip_address = "192.168.16.254/32"
2026-02-14 02:17:37.883090 | orchestrator | }
2026-02-14 02:17:37.883094 | orchestrator | + allowed_address_pairs {
2026-02-14 02:17:37.883097 | orchestrator | + ip_address = "192.168.16.8/32"
2026-02-14 02:17:37.883100 | orchestrator | }
2026-02-14 02:17:37.883103 | orchestrator | + allowed_address_pairs {
2026-02-14 02:17:37.883106 | orchestrator | + ip_address = "192.168.16.9/32"
2026-02-14 02:17:37.883110 | orchestrator | }
2026-02-14 02:17:37.883113 | orchestrator |
2026-02-14 02:17:37.883116 | orchestrator | + binding (known after apply)
2026-02-14 02:17:37.883119 | orchestrator |
2026-02-14 02:17:37.883122 | orchestrator | + fixed_ip {
2026-02-14 02:17:37.883125 | orchestrator | + ip_address = "192.168.16.14"
2026-02-14 02:17:37.883128 | orchestrator | + subnet_id = (known after apply)
2026-02-14 02:17:37.883131 | orchestrator | }
2026-02-14 02:17:37.883135 | orchestrator | }
2026-02-14 02:17:37.883138 | orchestrator |
2026-02-14 02:17:37.883141 | orchestrator | # openstack_networking_port_v2.node_port_management[5] will be created
2026-02-14 02:17:37.883144 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" {
2026-02-14 02:17:37.883147 | orchestrator | + admin_state_up = (known after apply)
2026-02-14 02:17:37.883151 | orchestrator | + all_fixed_ips = (known after apply)
2026-02-14 02:17:37.883154 | orchestrator | + all_security_group_ids = (known after apply)
2026-02-14 02:17:37.883157 | orchestrator | + all_tags = (known after apply)
2026-02-14 02:17:37.883160 | orchestrator | + device_id = (known after apply)
2026-02-14 02:17:37.883163 | orchestrator | + device_owner = (known after apply)
2026-02-14 02:17:37.883166 | orchestrator | + dns_assignment = (known after apply)
2026-02-14 02:17:37.883169 | orchestrator | + dns_name = (known after apply)
2026-02-14 02:17:37.883172 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.883176 | orchestrator | + mac_address = (known after apply)
2026-02-14 02:17:37.883179 | orchestrator | + network_id = (known after apply)
2026-02-14 02:17:37.883182 | orchestrator | + port_security_enabled = (known after apply)
2026-02-14 02:17:37.883185 | orchestrator | + qos_policy_id = (known after apply)
2026-02-14 02:17:37.883191 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.883194 | orchestrator | + security_group_ids = (known after apply)
2026-02-14 02:17:37.883197 | orchestrator | + tenant_id = (known after apply)
2026-02-14 02:17:37.883200 | orchestrator |
2026-02-14 02:17:37.883203 | orchestrator | + allowed_address_pairs {
2026-02-14 02:17:37.883206 | orchestrator | + ip_address = "192.168.16.254/32"
2026-02-14 02:17:37.883209 | orchestrator | }
2026-02-14 02:17:37.883212 | orchestrator | + allowed_address_pairs {
2026-02-14 02:17:37.883216 | orchestrator | + ip_address = "192.168.16.8/32"
2026-02-14 02:17:37.883219 | orchestrator | }
2026-02-14 02:17:37.883222 | orchestrator | + allowed_address_pairs {
2026-02-14 02:17:37.883225 | orchestrator | + ip_address = "192.168.16.9/32"
2026-02-14 02:17:37.883228 | orchestrator | }
2026-02-14 02:17:37.883231 | orchestrator |
2026-02-14 02:17:37.883237 | orchestrator | + binding (known after apply)
2026-02-14 02:17:37.883241 | orchestrator |
2026-02-14 02:17:37.883244 | orchestrator | + fixed_ip {
2026-02-14 02:17:37.883247 | orchestrator | + ip_address = "192.168.16.15"
2026-02-14 02:17:37.883250 | orchestrator | + subnet_id = (known after apply)
2026-02-14 02:17:37.883253 | orchestrator | }
2026-02-14 02:17:37.883256 | orchestrator | }
2026-02-14 02:17:37.883261 | orchestrator |
2026-02-14 02:17:37.883264 | orchestrator | # openstack_networking_router_interface_v2.router_interface will be created
2026-02-14 02:17:37.883267 | orchestrator | + resource "openstack_networking_router_interface_v2" "router_interface" {
2026-02-14 02:17:37.883270 | orchestrator | + force_destroy = false
2026-02-14 02:17:37.883273 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.883277 | orchestrator | + port_id = (known after apply)
2026-02-14 02:17:37.883280 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.883283 | orchestrator | + router_id = (known after apply)
2026-02-14 02:17:37.883286 | orchestrator | + subnet_id = (known after apply)
2026-02-14 02:17:37.883289 | orchestrator | }
2026-02-14 02:17:37.883292 | orchestrator |
2026-02-14 02:17:37.883296 | orchestrator | # openstack_networking_router_v2.router will be created
2026-02-14 02:17:37.883299 | orchestrator | + resource "openstack_networking_router_v2" "router" {
2026-02-14 02:17:37.883302 | orchestrator | + admin_state_up = (known after apply)
2026-02-14 02:17:37.883305 | orchestrator | + all_tags = (known after apply)
2026-02-14 02:17:37.883308 | orchestrator | + availability_zone_hints = [
2026-02-14 02:17:37.883311 | orchestrator | + "nova",
2026-02-14 02:17:37.883314 | orchestrator | ]
2026-02-14 02:17:37.883318 | orchestrator | + distributed = (known after apply)
2026-02-14 02:17:37.883321 | orchestrator | + enable_snat = (known after apply)
2026-02-14 02:17:37.883324 | orchestrator | + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
2026-02-14 02:17:37.883327 | orchestrator | + external_qos_policy_id = (known after apply)
2026-02-14 02:17:37.883330 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.883333 | orchestrator | + name = "testbed"
2026-02-14 02:17:37.883336 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.883339 | orchestrator | + tenant_id = (known after apply)
2026-02-14 02:17:37.883342 | orchestrator |
2026-02-14 02:17:37.883346 | orchestrator | + external_fixed_ip (known after apply)
2026-02-14 02:17:37.883349 | orchestrator | }
2026-02-14 02:17:37.883352 | orchestrator |
2026-02-14 02:17:37.883355 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
2026-02-14 02:17:37.883359 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
2026-02-14 02:17:37.883362 | orchestrator | + description = "ssh"
2026-02-14 02:17:37.883365 | orchestrator | + direction = "ingress"
2026-02-14 02:17:37.883368 | orchestrator | + ethertype = "IPv4"
2026-02-14 02:17:37.883371 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.883374 | orchestrator | + port_range_max = 22
2026-02-14 02:17:37.883378 | orchestrator | + port_range_min = 22
2026-02-14 02:17:37.883381 | orchestrator | + protocol = "tcp"
2026-02-14 02:17:37.883384 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.883392 | orchestrator | + remote_address_group_id = (known after apply)
2026-02-14 02:17:37.883395 | orchestrator | + remote_group_id = (known after apply)
2026-02-14 02:17:37.883398 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-02-14 02:17:37.883402 | orchestrator | + security_group_id = (known after apply)
2026-02-14 02:17:37.883405 | orchestrator | + tenant_id = (known after apply)
2026-02-14 02:17:37.883408 | orchestrator | }
2026-02-14 02:17:37.883412 | orchestrator |
2026-02-14 02:17:37.883415 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
2026-02-14 02:17:37.883418 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
2026-02-14 02:17:37.883422 | orchestrator | + description = "wireguard"
2026-02-14 02:17:37.883425 | orchestrator | + direction = "ingress"
2026-02-14 02:17:37.883428 | orchestrator | + ethertype = "IPv4"
2026-02-14 02:17:37.883431 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.883434 | orchestrator | + port_range_max = 51820
2026-02-14 02:17:37.883437 | orchestrator | + port_range_min = 51820
2026-02-14 02:17:37.883440 | orchestrator | + protocol = "udp"
2026-02-14 02:17:37.883443 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.883447 | orchestrator | + remote_address_group_id = (known after apply)
2026-02-14 02:17:37.883450 | orchestrator | + remote_group_id = (known after apply)
2026-02-14 02:17:37.883453 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-02-14 02:17:37.883456 | orchestrator | + security_group_id = (known after apply)
2026-02-14 02:17:37.883459 | orchestrator | + tenant_id = (known after apply)
2026-02-14 02:17:37.883462 | orchestrator | }
2026-02-14 02:17:37.883465 | orchestrator |
2026-02-14 02:17:37.883469 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
2026-02-14 02:17:37.883472 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
2026-02-14 02:17:37.883475 | orchestrator | + direction = "ingress"
2026-02-14 02:17:37.883478 | orchestrator | + ethertype = "IPv4"
2026-02-14 02:17:37.883481 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.883484 | orchestrator | + protocol = "tcp"
2026-02-14 02:17:37.883487 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.883490 | orchestrator | + remote_address_group_id = (known after apply)
2026-02-14 02:17:37.883493 | orchestrator | + remote_group_id = (known after apply)
2026-02-14 02:17:37.883496 | orchestrator | + remote_ip_prefix = "192.168.16.0/20"
2026-02-14 02:17:37.883500 | orchestrator | + security_group_id = (known after apply)
2026-02-14 02:17:37.883503 | orchestrator | + tenant_id = (known after apply)
2026-02-14 02:17:37.883506 | orchestrator | }
2026-02-14 02:17:37.883509 | orchestrator |
2026-02-14 02:17:37.883512 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
2026-02-14 02:17:37.883517 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
2026-02-14 02:17:37.883522 | orchestrator | + direction = "ingress"
2026-02-14 02:17:37.883530 | orchestrator | + ethertype = "IPv4"
2026-02-14 02:17:37.883535 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.883540 | orchestrator | + protocol = "udp"
2026-02-14 02:17:37.883546 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.883550 | orchestrator | + remote_address_group_id = (known after apply)
2026-02-14 02:17:37.883556 | orchestrator | + remote_group_id = (known after apply)
2026-02-14 02:17:37.883561 | orchestrator | + remote_ip_prefix = "192.168.16.0/20"
2026-02-14 02:17:37.883566 | orchestrator | + security_group_id = (known after apply)
2026-02-14 02:17:37.883571 | orchestrator | + tenant_id = (known after apply)
2026-02-14 02:17:37.883575 | orchestrator | }
2026-02-14 02:17:37.883581 | orchestrator |
2026-02-14 02:17:37.883586 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
2026-02-14 02:17:37.883595 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
2026-02-14 02:17:37.883600 | orchestrator | + direction = "ingress"
2026-02-14 02:17:37.883605 | orchestrator | + ethertype = "IPv4"
2026-02-14 02:17:37.883609 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.883614 | orchestrator | + protocol = "icmp"
2026-02-14 02:17:37.883618 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.883623 | orchestrator | + remote_address_group_id = (known after apply)
2026-02-14 02:17:37.883628 | orchestrator | + remote_group_id = (known after apply)
2026-02-14 02:17:37.883633 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-02-14 02:17:37.883639 | orchestrator | + security_group_id = (known after apply)
2026-02-14 02:17:37.883644 | orchestrator | + tenant_id = (known after apply)
2026-02-14 02:17:37.883650 | orchestrator | }
2026-02-14 02:17:37.883655 | orchestrator |
2026-02-14 02:17:37.883660 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
2026-02-14 02:17:37.883665 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2026-02-14 02:17:37.883670 | orchestrator | + direction = "ingress"
2026-02-14 02:17:37.883675 | orchestrator | + ethertype = "IPv4"
2026-02-14 02:17:37.883680 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.883685 | orchestrator | + protocol = "tcp"
2026-02-14 02:17:37.883690 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.883696 | orchestrator | + remote_address_group_id = (known after apply)
2026-02-14 02:17:37.883704 | orchestrator | + remote_group_id = (known after apply)
2026-02-14 02:17:37.883710 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-02-14 02:17:37.883716 | orchestrator | + security_group_id = (known after apply)
2026-02-14 02:17:37.883721 | orchestrator | + tenant_id = (known after apply)
2026-02-14 02:17:37.883726 | orchestrator | }
2026-02-14 02:17:37.883734 | orchestrator |
2026-02-14 02:17:37.883739 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2026-02-14 02:17:37.883746 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2026-02-14 02:17:37.883749 | orchestrator | + direction = "ingress"
2026-02-14 02:17:37.883753 | orchestrator | + ethertype = "IPv4"
2026-02-14 02:17:37.883756 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.883759 | orchestrator | + protocol = "udp"
2026-02-14 02:17:37.883762 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.883765 | orchestrator | + remote_address_group_id = (known after apply)
2026-02-14 02:17:37.883768 | orchestrator | + remote_group_id = (known after apply)
2026-02-14 02:17:37.883772 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-02-14 02:17:37.883775 | orchestrator | + security_group_id = (known after apply)
2026-02-14 02:17:37.883778 | orchestrator | + tenant_id = (known after apply)
2026-02-14 02:17:37.883781 | orchestrator | }
2026-02-14 02:17:37.883784 | orchestrator |
2026-02-14 02:17:37.883787 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2026-02-14 02:17:37.883790 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2026-02-14 02:17:37.883793 | orchestrator | + direction = "ingress"
2026-02-14 02:17:37.883798 | orchestrator | + ethertype = "IPv4"
2026-02-14 02:17:37.883801 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.883805 | orchestrator | + protocol = "icmp"
2026-02-14 02:17:37.883808 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.883811 | orchestrator | + remote_address_group_id = (known after apply)
2026-02-14 02:17:37.883814 | orchestrator | + remote_group_id = (known after apply)
2026-02-14 02:17:37.883817 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-02-14 02:17:37.883820 | orchestrator | + security_group_id = (known after apply)
2026-02-14 02:17:37.883823 | orchestrator | + tenant_id = (known after apply)
2026-02-14 02:17:37.883829 | orchestrator | }
2026-02-14 02:17:37.883833 | orchestrator |
2026-02-14 02:17:37.883836 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2026-02-14 02:17:37.883849 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2026-02-14 02:17:37.883853 | orchestrator | + description = "vrrp"
2026-02-14 02:17:37.883856 | orchestrator | + direction = "ingress"
2026-02-14 02:17:37.883859 | orchestrator | + ethertype = "IPv4"
2026-02-14 02:17:37.883862 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.883865 | orchestrator | + protocol = "112"
2026-02-14 02:17:37.883869 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.883872 | orchestrator | + remote_address_group_id = (known after apply)
2026-02-14 02:17:37.883877 | orchestrator | + remote_group_id = (known after apply)
2026-02-14 02:17:37.883886 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-02-14 02:17:37.883891 | orchestrator | + security_group_id = (known after apply)
2026-02-14 02:17:37.883896 | orchestrator | + tenant_id = (known after apply)
2026-02-14 02:17:37.883901 | orchestrator | }
2026-02-14 02:17:37.884054 | orchestrator |
2026-02-14 02:17:37.884061 | orchestrator | # openstack_networking_secgroup_v2.security_group_management will be created
2026-02-14 02:17:37.884064 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_management" {
2026-02-14 02:17:37.884067 | orchestrator | + all_tags = (known after apply)
2026-02-14 02:17:37.884071 | orchestrator | + description = "management security group"
2026-02-14 02:17:37.884074 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.884077 | orchestrator | + name = "testbed-management"
2026-02-14 02:17:37.884080 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.884083 | orchestrator | + stateful = (known after apply)
2026-02-14 02:17:37.884086 | orchestrator | + tenant_id = (known after apply)
2026-02-14 02:17:37.884089 | orchestrator | }
2026-02-14 02:17:37.884094 | orchestrator |
2026-02-14 02:17:37.884097 | orchestrator | # openstack_networking_secgroup_v2.security_group_node will be created
2026-02-14 02:17:37.884100 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_node" {
2026-02-14 02:17:37.884104 | orchestrator | + all_tags = (known after apply)
2026-02-14 02:17:37.884107 | orchestrator | + description = "node security group"
2026-02-14 02:17:37.884110 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.884113 | orchestrator | + name = "testbed-node"
2026-02-14 02:17:37.884116 | orchestrator | + region = (known after apply)
2026-02-14 02:17:37.884119 | orchestrator | + stateful = (known after apply)
2026-02-14 02:17:37.884122 | orchestrator | + tenant_id = (known after apply)
2026-02-14 02:17:37.884126 | orchestrator | }
2026-02-14 02:17:37.884203 | orchestrator |
2026-02-14 02:17:37.884207 | orchestrator | # openstack_networking_subnet_v2.subnet_management will be created
2026-02-14 02:17:37.884210 | orchestrator | + resource "openstack_networking_subnet_v2" "subnet_management" {
2026-02-14 02:17:37.884213 | orchestrator | + all_tags = (known after apply)
2026-02-14 02:17:37.884216 | orchestrator | + cidr = "192.168.16.0/20"
2026-02-14 02:17:37.884219 | orchestrator | + dns_nameservers = [
2026-02-14 02:17:37.884222 | orchestrator | + "8.8.8.8",
2026-02-14 02:17:37.884226 | orchestrator | + "9.9.9.9",
2026-02-14 02:17:37.884229 | orchestrator | ]
2026-02-14 02:17:37.884232 | orchestrator | + enable_dhcp = true
2026-02-14 02:17:37.884235 | orchestrator | + gateway_ip = (known after apply)
2026-02-14 02:17:37.884238 | orchestrator | + id = (known after apply)
2026-02-14 02:17:37.884241 | orchestrator | + ip_version = 4
2026-02-14 02:17:37.884245 | orchestrator | + ipv6_address_mode = (known after apply)
2026-02-14 02:17:37.884248 | orchestrator | + ipv6_ra_mode = (known after apply)
2026-02-14 02:17:37.884251 | orchestrator | + name = "subnet-testbed-management"
2026-02-14 02:17:37.884254 | orchestrator | + network_id = (known after apply) 2026-02-14 02:17:37.884257 | orchestrator | + no_gateway = false 2026-02-14 02:17:37.884260 | orchestrator | + region = (known after apply) 2026-02-14 02:17:37.884263 | orchestrator | + service_types = (known after apply) 2026-02-14 02:17:37.884269 | orchestrator | + tenant_id = (known after apply) 2026-02-14 02:17:37.884273 | orchestrator | 2026-02-14 02:17:37.884276 | orchestrator | + allocation_pool { 2026-02-14 02:17:37.884279 | orchestrator | + end = "192.168.31.250" 2026-02-14 02:17:37.884282 | orchestrator | + start = "192.168.31.200" 2026-02-14 02:17:37.884285 | orchestrator | } 2026-02-14 02:17:37.884288 | orchestrator | } 2026-02-14 02:17:37.884292 | orchestrator | 2026-02-14 02:17:37.884296 | orchestrator | # terraform_data.image will be created 2026-02-14 02:17:37.884299 | orchestrator | + resource "terraform_data" "image" { 2026-02-14 02:17:37.884302 | orchestrator | + id = (known after apply) 2026-02-14 02:17:37.884305 | orchestrator | + input = "Ubuntu 24.04" 2026-02-14 02:17:37.884308 | orchestrator | + output = (known after apply) 2026-02-14 02:17:37.884311 | orchestrator | } 2026-02-14 02:17:37.884314 | orchestrator | 2026-02-14 02:17:37.884317 | orchestrator | # terraform_data.image_node will be created 2026-02-14 02:17:37.884320 | orchestrator | + resource "terraform_data" "image_node" { 2026-02-14 02:17:37.884324 | orchestrator | + id = (known after apply) 2026-02-14 02:17:37.884327 | orchestrator | + input = "Ubuntu 24.04" 2026-02-14 02:17:37.884330 | orchestrator | + output = (known after apply) 2026-02-14 02:17:37.884333 | orchestrator | } 2026-02-14 02:17:37.884336 | orchestrator | 2026-02-14 02:17:37.884339 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
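The two `terraform_data` resources at the end of the plan hold the image name ("Ubuntu 24.04") as both `input` and `output`. A minimal sketch of how this pattern is typically written (the data-source arguments here are assumptions; only the resource names and the image name appear in the plan):

```hcl
# Stores the configured image name in state; changing the name later
# produces a diff that can serve as a replacement trigger for dependents.
resource "terraform_data" "image" {
  input = "Ubuntu 24.04"
}

# A data source presumably resolves the same name to an image ID
# (the 846820b2-... ID read during apply would be the result of this lookup).
data "openstack_images_image_v2" "image" {
  name        = terraform_data.image.output
  most_recent = true
}
```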
2026-02-14 02:17:37.884342 | orchestrator |
2026-02-14 02:17:37.884345 | orchestrator | Changes to Outputs:
2026-02-14 02:17:37.884348 | orchestrator | + manager_address = (sensitive value)
2026-02-14 02:17:37.884352 | orchestrator | + private_key = (sensitive value)
2026-02-14 02:17:37.952550 | orchestrator | terraform_data.image_node: Creating...
2026-02-14 02:17:38.055794 | orchestrator | terraform_data.image: Creating...
2026-02-14 02:17:38.056403 | orchestrator | terraform_data.image: Creation complete after 0s [id=9b5ec4c9-ecf1-c0e6-6d47-1f2d8869de03]
2026-02-14 02:17:38.057185 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=d0cb580f-d2c4-a453-a9c0-1c1cbabb4416]
2026-02-14 02:17:38.077473 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-02-14 02:17:38.078185 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-02-14 02:17:38.084484 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-02-14 02:17:38.086153 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-02-14 02:17:38.087139 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-02-14 02:17:38.088113 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-02-14 02:17:38.089272 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-02-14 02:17:38.091207 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-02-14 02:17:38.091238 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-02-14 02:17:38.091246 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-02-14 02:17:38.561612 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-02-14 02:17:38.565464 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-02-14 02:17:38.566063 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-02-14 02:17:38.570519 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-02-14 02:17:38.596762 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2026-02-14 02:17:38.600825 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-02-14 02:17:39.321808 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=a8de4a66-5f8c-4243-8e05-5f0d3637882b]
2026-02-14 02:17:39.328425 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-02-14 02:17:41.696751 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=763dae4f-8aba-40cd-b4e7-eeabad093491]
2026-02-14 02:17:41.700071 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=54e6ca54-a1fe-4396-8891-5cf52f763d40]
2026-02-14 02:17:41.701795 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-02-14 02:17:41.703904 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-02-14 02:17:41.721387 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=43152e32-b25a-4e6e-b6b7-c3272099ce67]
2026-02-14 02:17:41.728088 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-02-14 02:17:41.746087 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=2ec12fdb-ec43-4dc2-9206-4086e60213b8]
2026-02-14 02:17:41.746137 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=8657c064-423f-4604-b6db-e42322d0b025]
2026-02-14 02:17:41.748987 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-02-14 02:17:41.749027 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-02-14 02:17:41.759780 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=f960435b-b83d-47c8-ac31-653544f80bd0]
2026-02-14 02:17:41.763047 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-02-14 02:17:41.803534 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=89ffb490-ef56-465e-9c2a-8772cc279d48]
2026-02-14 02:17:41.817804 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=600e740f-7698-45cf-9f18-28df3084435e]
2026-02-14 02:17:41.818620 | orchestrator | local_file.id_rsa_pub: Creating...
2026-02-14 02:17:41.830611 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-02-14 02:17:41.832672 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=8661962a120dd01de5471c4c8ddd5b58f499ed7c]
2026-02-14 02:17:41.835069 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=f8b6a063-90c4-466f-950a-7ec8689e5fcc]
2026-02-14 02:17:41.837888 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=7cc88698faeb129d67791cf4c5bd04770dc53b58]
2026-02-14 02:17:41.838383 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-02-14 02:17:42.661402 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=7d6eeb05-e83d-4317-802b-0715782d7f16]
2026-02-14 02:17:42.669751 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=920da8c9-5394-40af-9828-ba71060ccc04]
2026-02-14 02:17:42.676523 | orchestrator | openstack_networking_router_v2.router: Creating...
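The `local_file.id_rsa_pub` and `local_sensitive_file.id_rsa` resources created above write the generated keypair to disk. A hedged sketch of the usual shape of these resources, assuming the key comes from the `openstack_compute_keypair_v2.key` resource seen in this log (the filenames are assumptions, not shown here):

```hcl
# Keypair named "testbed" in the log; with no public_key supplied,
# the provider generates one and exposes the private half in state.
resource "openstack_compute_keypair_v2" "key" {
  name = "testbed"
}

# The public half can live in a plain local_file ...
resource "local_file" "id_rsa_pub" {
  content  = openstack_compute_keypair_v2.key.public_key
  filename = "id_rsa.pub" # assumed path
}

# ... while local_sensitive_file keeps the private key out of the
# plan/apply output and allows restrictive file permissions.
resource "local_sensitive_file" "id_rsa" {
  content         = openstack_compute_keypair_v2.key.private_key
  filename        = "id_rsa" # assumed path
  file_permission = "0600"
}
```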
2026-02-14 02:17:45.058809 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=582964e9-d5ca-49cb-a5b2-57a438ee9ec9]
2026-02-14 02:17:45.087213 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=01a64ec0-40ea-433b-abd1-e8b343921bd2]
2026-02-14 02:17:45.102808 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=b284434b-c033-46cd-9dae-de97b39c2172]
2026-02-14 02:17:45.149119 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=69aee15b-d447-41d8-b515-509351298397]
2026-02-14 02:17:45.331205 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=d4d2ac2a-1f55-4b1e-a4ba-c2f19de49bc7]
2026-02-14 02:17:45.340774 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=677d5586-73cd-49dc-a30b-5398ef511889]
2026-02-14 02:17:46.073095 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=01829a5e-48c7-4520-96a3-a368fe243ac8]
2026-02-14 02:17:46.077106 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-02-14 02:17:46.077194 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-02-14 02:17:46.078382 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-02-14 02:17:46.265662 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=80976525-4ef1-4817-97b0-57d0b943f081]
2026-02-14 02:17:46.274373 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-02-14 02:17:46.275637 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=6e1ae7e0-12ad-48ac-8dea-db9b650ce2f6]
2026-02-14 02:17:46.287131 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-02-14 02:17:46.290539 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-02-14 02:17:46.292725 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-02-14 02:17:46.294988 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-02-14 02:17:46.297657 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-02-14 02:17:46.300534 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-02-14 02:17:46.311618 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-02-14 02:17:46.319647 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-02-14 02:17:46.807743 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=053b1163-8226-4570-a6e8-88ae4ddee77b]
2026-02-14 02:17:46.811257 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-02-14 02:17:46.919202 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=209939a9-c650-4be7-8a42-ef588f58e85d]
2026-02-14 02:17:46.923040 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-02-14 02:17:46.934092 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=d87f97b4-2ee9-4e64-ae2d-fb0f7b8c47d4]
2026-02-14 02:17:46.939654 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-02-14 02:17:46.976621 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=1733135e-dad3-458c-9bb0-c761a1fa7697]
2026-02-14 02:17:46.979519 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-02-14 02:17:46.995636 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=6a168c72-6c6d-4abb-bd7e-4ce0dd84807d]
2026-02-14 02:17:46.999054 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-02-14 02:17:47.027505 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=a798ecbe-a809-42f3-b1aa-de17ef27652a]
2026-02-14 02:17:47.031195 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-02-14 02:17:47.089781 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=a8590fa5-365e-4505-b69b-4862d3690735]
2026-02-14 02:17:47.098983 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-02-14 02:17:47.303349 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=0b4b5fc3-cc0b-45fb-b50f-1561c1c75408]
2026-02-14 02:17:47.459451 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=b9834feb-ba2b-4b22-a448-cb350d7dc40a]
2026-02-14 02:17:47.483985 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=98edb304-7dee-401c-84bc-aed288629453]
2026-02-14 02:17:47.603665 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=a2b893eb-f16a-46e7-ba87-c8468aed1aa0]
2026-02-14 02:17:47.643285 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=1ab1c77c-d0b1-4fbc-8bad-fb597f5e0eea]
2026-02-14 02:17:47.746203 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 2s [id=1331837c-524d-4178-a860-e307f7433aed]
2026-02-14 02:17:47.805315 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=e9c59429-d515-4afb-9796-4fcce9926a88]
2026-02-14 02:17:47.904445 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 2s [id=63e88543-8dcb-4892-a25a-9d4827eb04de]
2026-02-14 02:17:48.076294 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=9ad54d20-079f-489a-b71f-fb0175187c2f]
2026-02-14 02:17:49.230448 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=740176d5-6ed2-4fec-9ea2-d9b456bd8e2c]
2026-02-14 02:17:49.243096 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-02-14 02:17:49.253559 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
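The manager's floating IP is created first and then bound to its port by a separate association resource, which is why the two steps show up as distinct resources in the log. A sketch using the resource names from this log (the pool name is an assumption, not shown here):

```hcl
resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "public" # external network name; assumed, not visible in the log
}

# Keeping the association in its own resource lets the port and the
# floating IP be created in parallel and re-linked independently.
resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}
```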
2026-02-14 02:17:49.258777 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-02-14 02:17:49.266288 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-02-14 02:17:49.266864 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-02-14 02:17:49.267277 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-02-14 02:17:49.276886 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-02-14 02:17:51.120638 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=42c57948-b307-4ff2-bf35-a76fbc9826e0]
2026-02-14 02:17:51.126131 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-02-14 02:17:51.132477 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-02-14 02:17:51.132566 | orchestrator | local_file.inventory: Creating...
2026-02-14 02:17:51.138895 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=1a86501c6b33c1764489493249b2c62122ccc4b0]
2026-02-14 02:17:51.138964 | orchestrator | local_file.inventory: Creation complete after 0s [id=837231669a6798f35e9adf98952cc0ddcc725bfe]
2026-02-14 02:17:52.033252 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=42c57948-b307-4ff2-bf35-a76fbc9826e0]
2026-02-14 02:17:59.255726 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-02-14 02:17:59.260991 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-02-14 02:17:59.274254 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-02-14 02:17:59.275400 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-02-14 02:17:59.278749 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-02-14 02:17:59.278847 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-02-14 02:18:09.264617 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-02-14 02:18:09.264669 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-02-14 02:18:09.274840 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-02-14 02:18:09.275912 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-02-14 02:18:09.279209 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-02-14 02:18:09.279260 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-02-14 02:18:09.842988 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=00074ad4-cc09-4790-b9f7-b49846631a3f]
2026-02-14 02:18:09.847455 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=6334f023-ef62-482e-bb90-623b0fbd18e5]
2026-02-14 02:18:09.908857 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=f15b755d-12e1-4f10-bca1-69cdb1dfa55f]
2026-02-14 02:18:09.994105 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 21s [id=7c56652b-2675-49bb-b694-b069cd99ebcc]
2026-02-14 02:18:19.279845 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-02-14 02:18:19.279898 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-02-14 02:18:20.278096 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=70a6b16f-faba-4236-ade5-bdc4eac1e722]
2026-02-14 02:18:20.306362 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=ff5b24da-6a09-4e95-8745-159e03c43f3e]
2026-02-14 02:18:20.316922 | orchestrator | null_resource.node_semaphore: Creating...
2026-02-14 02:18:20.319132 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=3048808448122539576]
2026-02-14 02:18:20.321938 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-02-14 02:18:20.326903 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-02-14 02:18:20.328859 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-02-14 02:18:20.331371 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-02-14 02:18:20.333218 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-02-14 02:18:20.336798 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-02-14 02:18:20.340108 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-02-14 02:18:20.344825 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-02-14 02:18:20.361258 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-02-14 02:18:20.362440 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-02-14 02:18:24.343763 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=6334f023-ef62-482e-bb90-623b0fbd18e5/43152e32-b25a-4e6e-b6b7-c3272099ce67]
2026-02-14 02:18:24.348585 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=70a6b16f-faba-4236-ade5-bdc4eac1e722/8657c064-423f-4604-b6db-e42322d0b025]
2026-02-14 02:18:24.363908 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=00074ad4-cc09-4790-b9f7-b49846631a3f/f960435b-b83d-47c8-ac31-653544f80bd0]
2026-02-14 02:18:24.378931 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=70a6b16f-faba-4236-ade5-bdc4eac1e722/2ec12fdb-ec43-4dc2-9206-4086e60213b8]
2026-02-14 02:18:24.387892 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=00074ad4-cc09-4790-b9f7-b49846631a3f/600e740f-7698-45cf-9f18-28df3084435e]
2026-02-14 02:18:24.429299 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=6334f023-ef62-482e-bb90-623b0fbd18e5/54e6ca54-a1fe-4396-8891-5cf52f763d40]
2026-02-14 02:18:30.339146 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Still creating... [10s elapsed]
2026-02-14 02:18:30.354434 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Still creating... [10s elapsed]
2026-02-14 02:18:30.361578 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-02-14 02:18:30.362719 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Still creating... [10s elapsed]
2026-02-14 02:18:30.456756 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=6334f023-ef62-482e-bb90-623b0fbd18e5/89ffb490-ef56-465e-9c2a-8772cc279d48]
2026-02-14 02:18:30.477866 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=70a6b16f-faba-4236-ade5-bdc4eac1e722/763dae4f-8aba-40cd-b4e7-eeabad093491]
2026-02-14 02:18:30.546875 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 11s [id=00074ad4-cc09-4790-b9f7-b49846631a3f/f8b6a063-90c4-466f-950a-7ec8689e5fcc]
2026-02-14 02:18:40.362243 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-02-14 02:18:40.697732 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=9749ca19-a4f0-4b96-a7a2-4b88692781d0]
2026-02-14 02:18:40.713254 | orchestrator |
2026-02-14 02:18:40.713322 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
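The "Changes to Outputs" section earlier marked `manager_address` and `private_key` as `(sensitive value)`, which is also why their values are blanked in the output summary that follows. The declarations behind this likely resemble the following sketch (the value expressions are assumptions):

```hcl
# sensitive = true redacts the value in plan/apply output; it is still
# retrievable with `terraform output -raw manager_address`.
output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address
  sensitive = true
}

output "private_key" {
  value     = openstack_compute_keypair_v2.key.private_key
  sensitive = true
}
```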
2026-02-14 02:18:40.713340 | orchestrator |
2026-02-14 02:18:40.713347 | orchestrator | Outputs:
2026-02-14 02:18:40.713353 | orchestrator |
2026-02-14 02:18:40.713359 | orchestrator | manager_address =
2026-02-14 02:18:40.713365 | orchestrator | private_key =
2026-02-14 02:18:40.786120 | orchestrator | ok: Runtime: 0:01:08.557508
2026-02-14 02:18:40.810736 |
2026-02-14 02:18:40.810882 | TASK [Fetch manager address]
2026-02-14 02:18:41.306732 | orchestrator | ok
2026-02-14 02:18:41.316836 |
2026-02-14 02:18:41.316964 | TASK [Set manager_host address]
2026-02-14 02:18:41.397199 | orchestrator | ok
2026-02-14 02:18:41.407572 |
2026-02-14 02:18:41.407725 | LOOP [Update ansible collections]
2026-02-14 02:18:43.818490 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-02-14 02:18:43.818936 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-14 02:18:43.819004 | orchestrator | Starting galaxy collection install process
2026-02-14 02:18:43.819048 | orchestrator | Process install dependency map
2026-02-14 02:18:43.819087 | orchestrator | Starting collection install process
2026-02-14 02:18:43.819122 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons'
2026-02-14 02:18:43.819159 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons
2026-02-14 02:18:43.819216 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-02-14 02:18:43.819294 | orchestrator | ok: Item: commons Runtime: 0:00:02.045169
2026-02-14 02:18:44.687531 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-14 02:18:44.687720 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-02-14 02:18:44.687774 | orchestrator | Starting galaxy collection install process
2026-02-14 02:18:44.687814 | orchestrator | Process install dependency map
2026-02-14 02:18:44.687851 | orchestrator | Starting collection install process
2026-02-14 02:18:44.687885 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services'
2026-02-14 02:18:44.687919 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services
2026-02-14 02:18:44.687954 | orchestrator | osism.services:999.0.0 was installed successfully
2026-02-14 02:18:44.688011 | orchestrator | ok: Item: services Runtime: 0:00:00.639028
2026-02-14 02:18:44.707534 |
2026-02-14 02:18:44.707757 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-02-14 02:18:55.202446 | orchestrator | ok
2026-02-14 02:18:55.211075 |
2026-02-14 02:18:55.211183 | TASK [Wait a little longer for the manager so that everything is ready]
2026-02-14 02:19:55.245307 | orchestrator | ok
2026-02-14 02:19:55.253333 |
2026-02-14 02:19:55.253446 | TASK [Fetch manager ssh hostkey]
2026-02-14 02:19:56.840097 | orchestrator | Output suppressed because no_log was given
2026-02-14 02:19:56.855471 |
2026-02-14 02:19:56.855741 | TASK [Get ssh keypair from terraform environment]
2026-02-14 02:19:57.391315 | orchestrator | ok: Runtime: 0:00:00.005228
2026-02-14 02:19:57.409257 |
2026-02-14 02:19:57.409457 | TASK [Point out that the following task takes some time and does not give any output]
2026-02-14 02:19:57.442391 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-02-14 02:19:57.453153 |
2026-02-14 02:19:57.453285 | TASK [Run manager part 0]
2026-02-14 02:19:58.729253 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-14 02:19:58.797559 | orchestrator |
2026-02-14 02:19:58.797611 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-02-14 02:19:58.797620 | orchestrator |
2026-02-14 02:19:58.797635 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-02-14 02:20:00.548074 | orchestrator | ok: [testbed-manager]
2026-02-14 02:20:00.548116 | orchestrator |
2026-02-14 02:20:00.548138 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-02-14 02:20:00.548149 | orchestrator |
2026-02-14 02:20:00.548159 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-14 02:20:02.240199 | orchestrator | ok: [testbed-manager]
2026-02-14 02:20:02.240246 | orchestrator |
2026-02-14 02:20:02.240255 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-02-14 02:20:02.820391 | orchestrator | ok: [testbed-manager]
2026-02-14 02:20:02.820432 | orchestrator |
2026-02-14 02:20:02.820439 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-02-14 02:20:02.859481 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:20:02.859521 | orchestrator |
2026-02-14 02:20:02.859529 | orchestrator | TASK [Update package cache] ****************************************************
2026-02-14 02:20:02.885312 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:20:02.885365 | orchestrator |
2026-02-14 02:20:02.885375 | orchestrator | TASK [Install required packages] ***********************************************
2026-02-14 02:20:02.910705 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:20:02.910761 | orchestrator |
2026-02-14 02:20:02.910772 | orchestrator | TASK [Remove some python packages] *********************************************
2026-02-14 02:20:02.935002 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:20:02.935053 | orchestrator |
2026-02-14 02:20:02.935063 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-02-14 02:20:02.961483 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:20:02.961523 | orchestrator |
2026-02-14 02:20:02.961530 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-02-14 02:20:02.986918 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:20:02.986963 | orchestrator |
2026-02-14 02:20:02.986972 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-02-14 02:20:03.018588 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:20:03.018653 | orchestrator |
2026-02-14 02:20:03.018661 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-02-14 02:20:03.663244 | orchestrator | changed: [testbed-manager]
2026-02-14 02:20:03.663284 | orchestrator |
2026-02-14 02:20:03.663290 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-02-14 02:22:25.561356 | orchestrator | changed: [testbed-manager]
2026-02-14 02:22:25.561394 | orchestrator |
2026-02-14 02:22:25.561401 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-02-14 02:23:44.505668 | orchestrator | changed: [testbed-manager]
2026-02-14 02:23:44.505699 | orchestrator |
2026-02-14 02:23:44.505705 | orchestrator | TASK [Install required packages] ***********************************************
2026-02-14 02:24:05.082832 | orchestrator | changed: [testbed-manager]
2026-02-14 02:24:05.082889 | orchestrator |
2026-02-14 02:24:05.082901 | orchestrator | TASK [Remove some python packages] *********************************************
2026-02-14 02:24:13.257057 | orchestrator | changed: [testbed-manager]
2026-02-14 02:24:13.257093 | orchestrator |
2026-02-14 02:24:13.257100 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-02-14 02:24:13.287786 | orchestrator | ok: [testbed-manager]
2026-02-14 02:24:13.287822 | orchestrator |
2026-02-14 02:24:13.287827 | orchestrator | TASK [Get current user] ********************************************************
2026-02-14 02:24:13.939045 | orchestrator | ok: [testbed-manager]
2026-02-14 02:24:13.939094 | orchestrator |
2026-02-14 02:24:13.939104 | orchestrator | TASK [Create venv directory] ***************************************************
2026-02-14 02:24:14.489242 | orchestrator | changed: [testbed-manager]
2026-02-14 02:24:14.489285 | orchestrator |
2026-02-14 02:24:14.489290 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-02-14 02:24:20.977076 | orchestrator | changed: [testbed-manager]
2026-02-14 02:24:20.977127 | orchestrator |
2026-02-14 02:24:20.977144 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-02-14 02:24:27.861426 | orchestrator | changed: [testbed-manager]
2026-02-14 02:24:27.861484 | orchestrator |
2026-02-14 02:24:27.861507 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2026-02-14 02:24:30.529357 | orchestrator | changed: [testbed-manager]
2026-02-14 02:24:30.529411 | orchestrator |
2026-02-14 02:24:30.529417 | orchestrator | TASK [Install docker >= 7.1.0] *************************************************
2026-02-14 02:24:32.890773 | orchestrator | changed: [testbed-manager]
2026-02-14 02:24:32.890841 | orchestrator |
2026-02-14 02:24:32.890848 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-02-14
02:24:34.083281 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-14 02:24:34.083359 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-14 02:24:34.083368 | orchestrator | 2026-02-14 02:24:34.083377 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-02-14 02:24:34.122937 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-14 02:24:34.123004 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-14 02:24:34.123014 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-14 02:24:34.123021 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-02-14 02:24:42.229397 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-14 02:24:42.229636 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-14 02:24:42.229655 | orchestrator | 2026-02-14 02:24:42.229663 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-02-14 02:24:42.879821 | orchestrator | changed: [testbed-manager] 2026-02-14 02:24:42.879885 | orchestrator | 2026-02-14 02:24:42.879893 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-02-14 02:25:04.083432 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-02-14 02:25:04.083535 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-02-14 02:25:04.083545 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-02-14 02:25:04.083550 | orchestrator | 2026-02-14 02:25:04.083555 | orchestrator | TASK [Install local collections] *********************************************** 2026-02-14 02:25:06.775181 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2026-02-14 02:25:06.775262 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-02-14 02:25:06.775272 | orchestrator | 2026-02-14 02:25:06.775281 | orchestrator | PLAY [Create operator user] **************************************************** 2026-02-14 02:25:06.775290 | orchestrator | 2026-02-14 02:25:06.775298 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-14 02:25:08.296345 | orchestrator | ok: [testbed-manager] 2026-02-14 02:25:08.296410 | orchestrator | 2026-02-14 02:25:08.296427 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-14 02:25:08.341330 | orchestrator | ok: [testbed-manager] 2026-02-14 02:25:08.341379 | orchestrator | 2026-02-14 02:25:08.341387 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-14 02:25:08.419818 | orchestrator | ok: [testbed-manager] 2026-02-14 02:25:08.419911 | orchestrator | 2026-02-14 02:25:08.419925 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-02-14 02:25:09.370252 | orchestrator | changed: [testbed-manager] 2026-02-14 02:25:09.370338 | orchestrator | 2026-02-14 02:25:09.370352 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-02-14 02:25:10.114873 | orchestrator | changed: [testbed-manager] 2026-02-14 02:25:10.114940 | orchestrator | 2026-02-14 02:25:10.114947 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-02-14 02:25:11.583251 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-02-14 02:25:11.583326 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-02-14 02:25:11.583341 | orchestrator | 2026-02-14 02:25:11.583369 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2026-02-14 02:25:13.052068 | orchestrator | changed: [testbed-manager] 2026-02-14 02:25:13.052158 | orchestrator | 2026-02-14 02:25:13.052167 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-02-14 02:25:14.789076 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-02-14 02:25:14.789114 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-02-14 02:25:14.789119 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-02-14 02:25:14.789124 | orchestrator | 2026-02-14 02:25:14.789130 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-02-14 02:25:14.841338 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:25:14.841389 | orchestrator | 2026-02-14 02:25:14.841399 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-02-14 02:25:14.910871 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:25:14.910907 | orchestrator | 2026-02-14 02:25:14.910915 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-02-14 02:25:15.495287 | orchestrator | changed: [testbed-manager] 2026-02-14 02:25:15.495331 | orchestrator | 2026-02-14 02:25:15.495340 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-02-14 02:25:15.563666 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:25:15.563702 | orchestrator | 2026-02-14 02:25:15.563708 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-02-14 02:25:16.615386 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-14 02:25:16.615426 | orchestrator | changed: [testbed-manager] 2026-02-14 02:25:16.615432 | orchestrator | 2026-02-14 02:25:16.615437 | orchestrator | TASK 
[osism.commons.operator : Delete ssh authorized keys] ********************* 2026-02-14 02:25:16.646167 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:25:16.646250 | orchestrator | 2026-02-14 02:25:16.646268 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-02-14 02:25:16.676292 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:25:16.676355 | orchestrator | 2026-02-14 02:25:16.676362 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-02-14 02:25:16.704021 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:25:16.704082 | orchestrator | 2026-02-14 02:25:16.704090 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-02-14 02:25:16.768251 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:25:16.768312 | orchestrator | 2026-02-14 02:25:16.768319 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-02-14 02:25:17.563499 | orchestrator | ok: [testbed-manager] 2026-02-14 02:25:17.563535 | orchestrator | 2026-02-14 02:25:17.563542 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-02-14 02:25:17.563547 | orchestrator | 2026-02-14 02:25:17.563552 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-14 02:25:18.987844 | orchestrator | ok: [testbed-manager] 2026-02-14 02:25:18.987882 | orchestrator | 2026-02-14 02:25:18.987887 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-02-14 02:25:20.022637 | orchestrator | changed: [testbed-manager] 2026-02-14 02:25:20.022709 | orchestrator | 2026-02-14 02:25:20.022723 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 02:25:20.022736 | orchestrator | testbed-manager : ok=33 changed=23 
unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-02-14 02:25:20.022757 | orchestrator | 2026-02-14 02:25:20.644398 | orchestrator | ok: Runtime: 0:05:22.371726 2026-02-14 02:25:20.654455 | 2026-02-14 02:25:20.654571 | TASK [Point out that logging in on the manager is now possible] 2026-02-14 02:25:20.689346 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-02-14 02:25:20.697901 | 2026-02-14 02:25:20.698011 | TASK [Point out that the following task takes some time and does not give any output] 2026-02-14 02:25:20.738523 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output here. It takes a few minutes for this task to complete. 2026-02-14 02:25:20.747076 | 2026-02-14 02:25:20.747205 | TASK [Run manager part 1 + 2] 2026-02-14 02:25:21.701513 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-14 02:25:21.774340 | orchestrator | 2026-02-14 02:25:21.774405 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-02-14 02:25:21.774436 | orchestrator | 2026-02-14 02:25:21.774497 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-14 02:25:24.477863 | orchestrator | ok: [testbed-manager] 2026-02-14 02:25:24.477916 | orchestrator | 2026-02-14 02:25:24.477939 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-02-14 02:25:24.513592 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:25:24.513649 | orchestrator | 2026-02-14 02:25:24.513660 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-14 02:25:24.549000 | orchestrator | ok: [testbed-manager] 2026-02-14 02:25:24.549048 | orchestrator | 2026-02-14 02:25:24.549056 | orchestrator | TASK [osism.commons.repository : Gather variables for
each operating system] *** 2026-02-14 02:25:24.588595 | orchestrator | ok: [testbed-manager] 2026-02-14 02:25:24.588646 | orchestrator | 2026-02-14 02:25:24.588655 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-14 02:25:24.670209 | orchestrator | ok: [testbed-manager] 2026-02-14 02:25:24.670277 | orchestrator | 2026-02-14 02:25:24.670292 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-14 02:25:24.762042 | orchestrator | ok: [testbed-manager] 2026-02-14 02:25:24.762104 | orchestrator | 2026-02-14 02:25:24.762113 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-14 02:25:24.814352 | orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-02-14 02:25:24.814408 | orchestrator | 2026-02-14 02:25:24.814414 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-02-14 02:25:25.791250 | orchestrator | ok: [testbed-manager] 2026-02-14 02:25:25.791322 | orchestrator | 2026-02-14 02:25:25.791338 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-14 02:25:25.849973 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:25:25.850061 | orchestrator | 2026-02-14 02:25:25.850072 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-14 02:25:27.407137 | orchestrator | changed: [testbed-manager] 2026-02-14 02:25:27.407211 | orchestrator | 2026-02-14 02:25:27.407226 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-14 02:25:27.991555 | orchestrator | ok: [testbed-manager] 2026-02-14 02:25:27.991627 | orchestrator | 2026-02-14 02:25:27.991640 | orchestrator | TASK [osism.commons.repository : Copy 
ubuntu.sources file] ********************* 2026-02-14 02:25:29.252061 | orchestrator | changed: [testbed-manager] 2026-02-14 02:25:29.252165 | orchestrator | 2026-02-14 02:25:29.252186 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-14 02:25:46.452014 | orchestrator | changed: [testbed-manager] 2026-02-14 02:25:46.452115 | orchestrator | 2026-02-14 02:25:46.452132 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-02-14 02:25:47.182347 | orchestrator | ok: [testbed-manager] 2026-02-14 02:25:47.182395 | orchestrator | 2026-02-14 02:25:47.182406 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-02-14 02:25:47.226944 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:25:47.226987 | orchestrator | 2026-02-14 02:25:47.226995 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-02-14 02:25:48.212017 | orchestrator | changed: [testbed-manager] 2026-02-14 02:25:48.212115 | orchestrator | 2026-02-14 02:25:48.212134 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-02-14 02:25:49.253998 | orchestrator | changed: [testbed-manager] 2026-02-14 02:25:49.254167 | orchestrator | 2026-02-14 02:25:49.254180 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-02-14 02:25:49.960860 | orchestrator | changed: [testbed-manager] 2026-02-14 02:25:49.960984 | orchestrator | 2026-02-14 02:25:49.961010 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-02-14 02:25:50.004948 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-14 02:25:50.005067 | orchestrator | display.prompt_until(msg) instead. 
This feature will be removed in version 2026-02-14 02:25:50.005082 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-14 02:25:50.005090 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-02-14 02:25:52.850119 | orchestrator | changed: [testbed-manager] 2026-02-14 02:25:52.850312 | orchestrator | 2026-02-14 02:25:52.850325 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-02-14 02:26:03.835198 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-02-14 02:26:03.835328 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-02-14 02:26:03.835350 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-02-14 02:26:03.835363 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-02-14 02:26:03.835383 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-02-14 02:26:03.835395 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-02-14 02:26:03.835406 | orchestrator | 2026-02-14 02:26:03.835447 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-02-14 02:26:04.945394 | orchestrator | changed: [testbed-manager] 2026-02-14 02:26:04.945479 | orchestrator | 2026-02-14 02:26:04.945488 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-02-14 02:26:04.987572 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:26:04.987693 | orchestrator | 2026-02-14 02:26:04.987724 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-02-14 02:26:08.041744 | orchestrator | changed: [testbed-manager] 2026-02-14 02:26:08.041851 | orchestrator | 2026-02-14 02:26:08.041873 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-02-14 02:26:08.085398 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:26:08.085535 | 
orchestrator | 2026-02-14 02:26:08.085553 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-02-14 02:28:10.579731 | orchestrator | changed: [testbed-manager] 2026-02-14 02:28:10.579826 | orchestrator | 2026-02-14 02:28:10.579836 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-14 02:28:11.869517 | orchestrator | ok: [testbed-manager] 2026-02-14 02:28:11.869568 | orchestrator | 2026-02-14 02:28:11.869575 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 02:28:11.869580 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-02-14 02:28:11.869585 | orchestrator | 2026-02-14 02:28:12.443401 | orchestrator | ok: Runtime: 0:02:50.913956 2026-02-14 02:28:12.459416 | 2026-02-14 02:28:12.459569 | TASK [Reboot manager] 2026-02-14 02:28:13.996387 | orchestrator | ok: Runtime: 0:00:01.040677 2026-02-14 02:28:14.005019 | 2026-02-14 02:28:14.005136 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-02-14 02:28:30.473043 | orchestrator | ok 2026-02-14 02:28:30.481135 | 2026-02-14 02:28:30.481247 | TASK [Wait a little longer for the manager so that everything is ready] 2026-02-14 02:29:30.529522 | orchestrator | ok 2026-02-14 02:29:30.539235 | 2026-02-14 02:29:30.539367 | TASK [Deploy manager + bootstrap nodes] 2026-02-14 02:29:33.386403 | orchestrator | 2026-02-14 02:29:33.386531 | orchestrator | # DEPLOY MANAGER 2026-02-14 02:29:33.386542 | orchestrator | 2026-02-14 02:29:33.386548 | orchestrator | + set -e 2026-02-14 02:29:33.386554 | orchestrator | + echo 2026-02-14 02:29:33.386560 | orchestrator | + echo '# DEPLOY MANAGER' 2026-02-14 02:29:33.386567 | orchestrator | + echo 2026-02-14 02:29:33.386590 | orchestrator | + cat /opt/manager-vars.sh 2026-02-14 02:29:33.390095 | orchestrator | export NUMBER_OF_NODES=6 2026-02-14 
02:29:33.390147 | orchestrator | 2026-02-14 02:29:33.390153 | orchestrator | export CEPH_VERSION=reef 2026-02-14 02:29:33.390159 | orchestrator | export CONFIGURATION_VERSION=main 2026-02-14 02:29:33.390164 | orchestrator | export MANAGER_VERSION=9.5.0 2026-02-14 02:29:33.390177 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-02-14 02:29:33.390181 | orchestrator | 2026-02-14 02:29:33.390190 | orchestrator | export ARA=false 2026-02-14 02:29:33.390194 | orchestrator | export DEPLOY_MODE=manager 2026-02-14 02:29:33.390202 | orchestrator | export TEMPEST=false 2026-02-14 02:29:33.390206 | orchestrator | export IS_ZUUL=true 2026-02-14 02:29:33.390210 | orchestrator | 2026-02-14 02:29:33.390217 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.122 2026-02-14 02:29:33.390222 | orchestrator | export EXTERNAL_API=false 2026-02-14 02:29:33.390226 | orchestrator | 2026-02-14 02:29:33.390230 | orchestrator | export IMAGE_USER=ubuntu 2026-02-14 02:29:33.390237 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-02-14 02:29:33.390241 | orchestrator | 2026-02-14 02:29:33.390245 | orchestrator | export CEPH_STACK=ceph-ansible 2026-02-14 02:29:33.390412 | orchestrator | 2026-02-14 02:29:33.390420 | orchestrator | + echo 2026-02-14 02:29:33.390425 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-14 02:29:33.392047 | orchestrator | ++ export INTERACTIVE=false 2026-02-14 02:29:33.392060 | orchestrator | ++ INTERACTIVE=false 2026-02-14 02:29:33.392065 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-14 02:29:33.392071 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-14 02:29:33.392345 | orchestrator | + source /opt/manager-vars.sh 2026-02-14 02:29:33.392353 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-14 02:29:33.392357 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-14 02:29:33.392361 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-14 02:29:33.392365 | orchestrator | ++ CEPH_VERSION=reef 2026-02-14 02:29:33.392369 | orchestrator 
| ++ export CONFIGURATION_VERSION=main 2026-02-14 02:29:33.392373 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-14 02:29:33.392406 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-14 02:29:33.392412 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-14 02:29:33.392415 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-14 02:29:33.392427 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-14 02:29:33.392431 | orchestrator | ++ export ARA=false 2026-02-14 02:29:33.392435 | orchestrator | ++ ARA=false 2026-02-14 02:29:33.392439 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-14 02:29:33.392443 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-14 02:29:33.392448 | orchestrator | ++ export TEMPEST=false 2026-02-14 02:29:33.392452 | orchestrator | ++ TEMPEST=false 2026-02-14 02:29:33.392456 | orchestrator | ++ export IS_ZUUL=true 2026-02-14 02:29:33.392460 | orchestrator | ++ IS_ZUUL=true 2026-02-14 02:29:33.392646 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.122 2026-02-14 02:29:33.392655 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.122 2026-02-14 02:29:33.392662 | orchestrator | ++ export EXTERNAL_API=false 2026-02-14 02:29:33.392667 | orchestrator | ++ EXTERNAL_API=false 2026-02-14 02:29:33.392673 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-14 02:29:33.392679 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-14 02:29:33.392703 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-14 02:29:33.392709 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-14 02:29:33.392718 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-14 02:29:33.392726 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-14 02:29:33.392961 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-02-14 02:29:33.456051 | orchestrator | + docker version 2026-02-14 02:29:33.582540 | orchestrator | Client: Docker Engine - Community 2026-02-14 02:29:33.582666 | orchestrator | Version: 27.5.1 
2026-02-14 02:29:33.582691 | orchestrator | API version: 1.47 2026-02-14 02:29:33.582708 | orchestrator | Go version: go1.22.11 2026-02-14 02:29:33.582723 | orchestrator | Git commit: 9f9e405 2026-02-14 02:29:33.582739 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-14 02:29:33.582757 | orchestrator | OS/Arch: linux/amd64 2026-02-14 02:29:33.582772 | orchestrator | Context: default 2026-02-14 02:29:33.582788 | orchestrator | 2026-02-14 02:29:33.582805 | orchestrator | Server: Docker Engine - Community 2026-02-14 02:29:33.582822 | orchestrator | Engine: 2026-02-14 02:29:33.582839 | orchestrator | Version: 27.5.1 2026-02-14 02:29:33.582857 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-02-14 02:29:33.582911 | orchestrator | Go version: go1.22.11 2026-02-14 02:29:33.582928 | orchestrator | Git commit: 4c9b3b0 2026-02-14 02:29:33.582945 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-14 02:29:33.582960 | orchestrator | OS/Arch: linux/amd64 2026-02-14 02:29:33.582976 | orchestrator | Experimental: false 2026-02-14 02:29:33.582992 | orchestrator | containerd: 2026-02-14 02:29:33.583010 | orchestrator | Version: v2.2.1 2026-02-14 02:29:33.583026 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-02-14 02:29:33.583043 | orchestrator | runc: 2026-02-14 02:29:33.583058 | orchestrator | Version: 1.3.4 2026-02-14 02:29:33.583073 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-02-14 02:29:33.583090 | orchestrator | docker-init: 2026-02-14 02:29:33.583106 | orchestrator | Version: 0.19.0 2026-02-14 02:29:33.583123 | orchestrator | GitCommit: de40ad0 2026-02-14 02:29:33.586646 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-02-14 02:29:33.596799 | orchestrator | + set -e 2026-02-14 02:29:33.596876 | orchestrator | + source /opt/manager-vars.sh 2026-02-14 02:29:33.596891 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-14 02:29:33.597352 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-14 
02:29:33.597367 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-14 02:29:33.597377 | orchestrator | ++ CEPH_VERSION=reef 2026-02-14 02:29:33.597387 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-14 02:29:33.597398 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-14 02:29:33.597408 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-14 02:29:33.597417 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-14 02:29:33.597427 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-14 02:29:33.597437 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-14 02:29:33.597446 | orchestrator | ++ export ARA=false 2026-02-14 02:29:33.597456 | orchestrator | ++ ARA=false 2026-02-14 02:29:33.597466 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-14 02:29:33.597475 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-14 02:29:33.597485 | orchestrator | ++ export TEMPEST=false 2026-02-14 02:29:33.597494 | orchestrator | ++ TEMPEST=false 2026-02-14 02:29:33.597504 | orchestrator | ++ export IS_ZUUL=true 2026-02-14 02:29:33.597513 | orchestrator | ++ IS_ZUUL=true 2026-02-14 02:29:33.597522 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.122 2026-02-14 02:29:33.597532 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.122 2026-02-14 02:29:33.597541 | orchestrator | ++ export EXTERNAL_API=false 2026-02-14 02:29:33.597551 | orchestrator | ++ EXTERNAL_API=false 2026-02-14 02:29:33.597560 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-14 02:29:33.597569 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-14 02:29:33.597579 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-14 02:29:33.597588 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-14 02:29:33.597598 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-14 02:29:33.597607 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-14 02:29:33.597619 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-14 02:29:33.597635 | orchestrator | ++ export 
INTERACTIVE=false
2026-02-14 02:29:33.597646 | orchestrator | ++ INTERACTIVE=false
2026-02-14 02:29:33.597655 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-14 02:29:33.597669 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-14 02:29:33.597679 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-02-14 02:29:33.597688 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0
2026-02-14 02:29:33.603969 | orchestrator | + set -e
2026-02-14 02:29:33.604044 | orchestrator | + VERSION=9.5.0
2026-02-14 02:29:33.604063 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml
2026-02-14 02:29:33.611158 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-02-14 02:29:33.611224 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-02-14 02:29:33.616575 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-02-14 02:29:33.621087 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-02-14 02:29:33.628749 | orchestrator | /opt/configuration ~
2026-02-14 02:29:33.628810 | orchestrator | + set -e
2026-02-14 02:29:33.628818 | orchestrator | + pushd /opt/configuration
2026-02-14 02:29:33.628825 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-14 02:29:33.630097 | orchestrator | + source /opt/venv/bin/activate
2026-02-14 02:29:33.631162 | orchestrator | ++ deactivate nondestructive
2026-02-14 02:29:33.631231 | orchestrator | ++ '[' -n '' ']'
2026-02-14 02:29:33.631244 | orchestrator | ++ '[' -n '' ']'
2026-02-14 02:29:33.631276 | orchestrator | ++ hash -r
2026-02-14 02:29:33.631285 | orchestrator | ++ '[' -n '' ']'
2026-02-14 02:29:33.631381 | orchestrator | ++ unset VIRTUAL_ENV
2026-02-14 02:29:33.631407 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-02-14 02:29:33.631417 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-02-14 02:29:33.631437 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-02-14 02:29:33.631453 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-02-14 02:29:33.631475 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-02-14 02:29:33.631491 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-02-14 02:29:33.631523 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-14 02:29:33.631539 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-14 02:29:33.631553 | orchestrator | ++ export PATH
2026-02-14 02:29:33.631569 | orchestrator | ++ '[' -n '' ']'
2026-02-14 02:29:33.631584 | orchestrator | ++ '[' -z '' ']'
2026-02-14 02:29:33.631598 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-02-14 02:29:33.631613 | orchestrator | ++ PS1='(venv) '
2026-02-14 02:29:33.631628 | orchestrator | ++ export PS1
2026-02-14 02:29:33.631645 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-02-14 02:29:33.631664 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-02-14 02:29:33.631676 | orchestrator | ++ hash -r
2026-02-14 02:29:33.631687 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-02-14 02:29:34.981184 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-02-14 02:29:34.982222 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5)
2026-02-14 02:29:35.006209 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-02-14 02:29:35.006349 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-02-14 02:29:35.006367 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-02-14 02:29:35.006377 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1)
2026-02-14 02:29:35.006389 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-02-14 02:29:35.006398 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-02-14 02:29:35.006409 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-02-14 02:29:35.043106 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4)
2026-02-14 02:29:35.044269 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-02-14 02:29:35.046416 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-02-14 02:29:35.047546 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4)
2026-02-14 02:29:35.052026 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-02-14 02:29:35.293344 | orchestrator | ++ which gilt
2026-02-14 02:29:35.296549 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-02-14 02:29:35.296664 | orchestrator | + /opt/venv/bin/gilt overlay
2026-02-14 02:29:35.604684 | orchestrator | osism.cfg-generics:
2026-02-14 02:29:35.787809 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-02-14 02:29:35.787934 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-02-14 02:29:35.788037 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-02-14 02:29:35.788059 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-02-14 02:29:36.464880 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-02-14 02:29:36.478693 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-02-14 02:29:36.843739 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-02-14 02:29:36.914471 | orchestrator | ~
2026-02-14 02:29:36.914571 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-14 02:29:36.914586 | orchestrator | + deactivate
2026-02-14 02:29:36.914599 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-02-14 02:29:36.914613 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-14 02:29:36.914624 | orchestrator | + export PATH
2026-02-14 02:29:36.914635 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-02-14 02:29:36.914646 | orchestrator | + '[' -n '' ']'
2026-02-14 02:29:36.914660 | orchestrator | + hash -r
2026-02-14 02:29:36.914671 | orchestrator | + '[' -n '' ']'
2026-02-14 02:29:36.914682 | orchestrator | + unset VIRTUAL_ENV
2026-02-14 02:29:36.914693 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-02-14 02:29:36.914704 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-02-14 02:29:36.914715 | orchestrator | + unset -f deactivate
2026-02-14 02:29:36.914726 | orchestrator | + popd
2026-02-14 02:29:36.916148 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-02-14 02:29:36.916180 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-02-14 02:29:36.916981 | orchestrator | ++ semver 9.5.0 7.0.0
2026-02-14 02:29:36.968928 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-14 02:29:36.969054 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-02-14 02:29:36.969177 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-02-14 02:29:37.030600 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-14 02:29:37.030985 | orchestrator | ++ semver 2024.2 2025.1
2026-02-14 02:29:37.078811 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-14 02:29:37.078904 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-02-14 02:29:37.157725 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-14 02:29:37.157795 | orchestrator | + source /opt/venv/bin/activate
2026-02-14 02:29:37.158083 | orchestrator | ++ deactivate nondestructive
2026-02-14 02:29:37.158413 | orchestrator | ++ '[' -n '' ']'
2026-02-14 02:29:37.158917 | orchestrator | ++ '[' -n '' ']'
2026-02-14 02:29:37.159001 | orchestrator | ++ hash -r
2026-02-14 02:29:37.159146 | orchestrator | ++ '[' -n '' ']'
2026-02-14 02:29:37.159165 | orchestrator | ++ unset VIRTUAL_ENV
2026-02-14 02:29:37.160497 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-02-14 02:29:37.160551 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-02-14 02:29:37.160560 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-02-14 02:29:37.160565 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-02-14 02:29:37.160571 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-02-14 02:29:37.160576 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-02-14 02:29:37.160582 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-14 02:29:37.160604 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-14 02:29:37.160610 | orchestrator | ++ export PATH
2026-02-14 02:29:37.160615 | orchestrator | ++ '[' -n '' ']'
2026-02-14 02:29:37.160620 | orchestrator | ++ '[' -z '' ']'
2026-02-14 02:29:37.160626 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-02-14 02:29:37.160634 | orchestrator | ++ PS1='(venv) '
2026-02-14 02:29:37.160642 | orchestrator | ++ export PS1
2026-02-14 02:29:37.160677 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-02-14 02:29:37.160689 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-02-14 02:29:37.160697 | orchestrator | ++ hash -r
2026-02-14 02:29:37.160713 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-02-14 02:29:38.416200 | orchestrator |
2026-02-14 02:29:38.416352 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-02-14 02:29:38.416370 | orchestrator |
2026-02-14 02:29:38.416383 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-02-14 02:29:39.076897 | orchestrator | ok: [testbed-manager]
2026-02-14 02:29:39.076985 | orchestrator |
2026-02-14 02:29:39.077001 | orchestrator | TASK [Copy fact files] *********************************************************
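An editorial aside on the shell trace above: the `semver 9.5.0 7.0.0` / `semver 9.5.0 10.0.0-0` calls return a three-way comparison (1, 0, or -1), and the `[[ ... -ge 0 ]]` guards gate version-dependent configuration such as `enable_osism_kubernetes: true`. A minimal, dependency-free sketch of that gating logic; `semver_cmp` and `extra_config` are our stand-ins, not the testbed's actual helper (the venv also carries the `packaging` library, which would do this parsing properly):

```python
# Stand-in for the testbed's `semver` helper: a three-way comparison
# (1 / 0 / -1). Assumption: it only needs to handle dotted numeric
# releases plus an optional "-N" pre-release suffix, which is all the
# trace exercises (9.5.0, 7.0.0, 10.0.0-0, 2024.2, 2025.1).
def semver_cmp(a: str, b: str) -> int:
    def key(v: str):
        release, _, pre = v.partition("-")
        nums = tuple(int(x) for x in release.split("."))
        # a "-N" pre-release sorts before the plain release
        return (nums, 0 if pre else 1, int(pre) if pre else 0)

    ka, kb = key(a), key(b)
    return (ka > kb) - (ka < kb)

# Gate version-dependent configuration, as the upgrade script does:
extra_config = []
if semver_cmp("9.5.0", "7.0.0") >= 0:        # returns 1, gate passes
    extra_config.append("enable_osism_kubernetes: true")
if semver_cmp("9.5.0", "10.0.0-0") >= 0:     # returns -1, gate skipped
    extra_config.append("hypothetical_10x_option: true")
```

With these inputs only the first gate passes, matching the single `echo 'enable_osism_kubernetes: true'` in the trace.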
2026-02-14 02:29:40.205559 | orchestrator | changed: [testbed-manager]
2026-02-14 02:29:40.205694 | orchestrator |
2026-02-14 02:29:40.205716 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-02-14 02:29:40.205776 | orchestrator |
2026-02-14 02:29:40.205794 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-14 02:29:42.791830 | orchestrator | ok: [testbed-manager]
2026-02-14 02:29:42.791947 | orchestrator |
2026-02-14 02:29:42.791972 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-02-14 02:29:42.851872 | orchestrator | ok: [testbed-manager]
2026-02-14 02:29:42.851956 | orchestrator |
2026-02-14 02:29:42.851968 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-02-14 02:29:43.377025 | orchestrator | changed: [testbed-manager]
2026-02-14 02:29:43.377140 | orchestrator |
2026-02-14 02:29:43.377161 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-02-14 02:29:43.419761 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:29:43.419879 | orchestrator |
2026-02-14 02:29:43.419902 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-02-14 02:29:43.800973 | orchestrator | changed: [testbed-manager]
2026-02-14 02:29:43.801074 | orchestrator |
2026-02-14 02:29:43.801090 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-02-14 02:29:44.228211 | orchestrator | ok: [testbed-manager]
2026-02-14 02:29:44.228412 | orchestrator |
2026-02-14 02:29:44.228447 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-02-14 02:29:44.378107 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:29:44.378202 | orchestrator |
2026-02-14 02:29:44.378218 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-02-14 02:29:44.378231 | orchestrator |
2026-02-14 02:29:44.378243 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-14 02:29:46.426839 | orchestrator | ok: [testbed-manager]
2026-02-14 02:29:46.426969 | orchestrator |
2026-02-14 02:29:46.426996 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-02-14 02:29:46.562249 | orchestrator | included: osism.services.traefik for testbed-manager
2026-02-14 02:29:46.562410 | orchestrator |
2026-02-14 02:29:46.562427 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-02-14 02:29:46.638258 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-02-14 02:29:46.638381 | orchestrator |
2026-02-14 02:29:46.638397 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-02-14 02:29:47.912521 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-02-14 02:29:47.912635 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-02-14 02:29:47.912653 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-02-14 02:29:47.912666 | orchestrator |
2026-02-14 02:29:47.912681 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-02-14 02:29:49.992062 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-02-14 02:29:49.992211 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-02-14 02:29:49.992240 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-02-14 02:29:49.992262 | orchestrator |
2026-02-14 02:29:49.992276 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2026-02-14 02:29:50.700593 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-14 02:29:50.700694 | orchestrator | changed: [testbed-manager]
2026-02-14 02:29:50.700711 | orchestrator |
2026-02-14 02:29:50.700725 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-02-14 02:29:51.394279 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-14 02:29:51.394438 | orchestrator | changed: [testbed-manager]
2026-02-14 02:29:51.394457 | orchestrator |
2026-02-14 02:29:51.394471 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-02-14 02:29:51.448783 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:29:51.448883 | orchestrator |
2026-02-14 02:29:51.448900 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-02-14 02:29:51.858358 | orchestrator | ok: [testbed-manager]
2026-02-14 02:29:51.858487 | orchestrator |
2026-02-14 02:29:51.858516 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-02-14 02:29:51.945085 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-02-14 02:29:51.945183 | orchestrator |
2026-02-14 02:29:51.945199 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-02-14 02:29:53.165038 | orchestrator | changed: [testbed-manager]
2026-02-14 02:29:53.165151 | orchestrator |
2026-02-14 02:29:53.165170 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-02-14 02:29:54.140796 | orchestrator | changed: [testbed-manager]
2026-02-14 02:29:54.140907 | orchestrator |
2026-02-14 02:29:54.140922 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-02-14 02:30:11.917746 | orchestrator | changed: [testbed-manager]
2026-02-14 02:30:11.917871 | orchestrator |
2026-02-14 02:30:11.917893 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-02-14 02:30:11.972699 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:30:11.972771 | orchestrator |
2026-02-14 02:30:11.972798 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-02-14 02:30:11.972806 | orchestrator |
2026-02-14 02:30:11.972813 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-14 02:30:13.964434 | orchestrator | ok: [testbed-manager]
2026-02-14 02:30:13.964542 | orchestrator |
2026-02-14 02:30:13.964558 | orchestrator | TASK [Apply manager role] ******************************************************
2026-02-14 02:30:14.115403 | orchestrator | included: osism.services.manager for testbed-manager
2026-02-14 02:30:14.115500 | orchestrator |
2026-02-14 02:30:14.115514 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-02-14 02:30:14.190460 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-02-14 02:30:14.190582 | orchestrator |
2026-02-14 02:30:14.190608 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-02-14 02:30:17.155992 | orchestrator | ok: [testbed-manager]
2026-02-14 02:30:17.156080 | orchestrator |
2026-02-14 02:30:17.156088 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-02-14 02:30:17.209583 | orchestrator | ok: [testbed-manager]
2026-02-14 02:30:17.209677 | orchestrator |
2026-02-14 02:30:17.209689 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-02-14 02:30:17.391507 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-02-14 02:30:17.391583 | orchestrator |
2026-02-14 02:30:17.391590 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-02-14 02:30:20.441402 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-02-14 02:30:20.441501 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-02-14 02:30:20.441513 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-02-14 02:30:20.441522 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-02-14 02:30:20.441530 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-02-14 02:30:20.441539 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-02-14 02:30:20.441547 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-02-14 02:30:20.441555 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-02-14 02:30:20.441564 | orchestrator |
2026-02-14 02:30:20.441573 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-02-14 02:30:21.127865 | orchestrator | changed: [testbed-manager]
2026-02-14 02:30:21.127972 | orchestrator |
2026-02-14 02:30:21.127994 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-02-14 02:30:21.818584 | orchestrator | changed: [testbed-manager]
2026-02-14 02:30:21.818674 | orchestrator |
2026-02-14 02:30:21.818687 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-02-14 02:30:21.888245 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-02-14 02:30:21.888397 | orchestrator |
2026-02-14 02:30:21.888443 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-02-14 02:30:23.333214 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-02-14 02:30:23.333359 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-02-14 02:30:23.333378 | orchestrator |
2026-02-14 02:30:23.333392 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-02-14 02:30:24.021023 | orchestrator | changed: [testbed-manager]
2026-02-14 02:30:24.021112 | orchestrator |
2026-02-14 02:30:24.021124 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-02-14 02:30:24.087529 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:30:24.087611 | orchestrator |
2026-02-14 02:30:24.087621 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-02-14 02:30:24.178568 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-02-14 02:30:24.178673 | orchestrator |
2026-02-14 02:30:24.178691 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-02-14 02:30:24.845403 | orchestrator | changed: [testbed-manager]
2026-02-14 02:30:24.845530 | orchestrator |
2026-02-14 02:30:24.845549 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-02-14 02:30:24.925528 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-02-14 02:30:24.925614 | orchestrator |
2026-02-14 02:30:24.925625 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-02-14 02:30:26.427611 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-14 02:30:26.427732 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-14 02:30:26.427753 | orchestrator | changed: [testbed-manager]
2026-02-14 02:30:26.427769 | orchestrator |
2026-02-14 02:30:26.427783 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-02-14 02:30:27.159533 | orchestrator | changed: [testbed-manager]
2026-02-14 02:30:27.159610 | orchestrator |
2026-02-14 02:30:27.159618 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-02-14 02:30:27.208921 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:30:27.209026 | orchestrator |
2026-02-14 02:30:27.209041 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-02-14 02:30:27.324848 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-02-14 02:30:27.324925 | orchestrator |
2026-02-14 02:30:27.324935 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-02-14 02:30:27.904521 | orchestrator | changed: [testbed-manager]
2026-02-14 02:30:27.904656 | orchestrator |
2026-02-14 02:30:27.904682 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-02-14 02:30:28.332050 | orchestrator | changed: [testbed-manager]
2026-02-14 02:30:28.332155 | orchestrator |
2026-02-14 02:30:28.332172 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-02-14 02:30:29.665499 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-02-14 02:30:29.665609 | orchestrator | changed: [testbed-manager] => (item=openstack)
2026-02-14 02:30:29.665626 | orchestrator |
2026-02-14 02:30:29.665639 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-02-14 02:30:30.370862 | orchestrator | changed: [testbed-manager]
2026-02-14 02:30:30.370959 | orchestrator |
2026-02-14 02:30:30.370972 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-02-14 02:30:30.780692 | orchestrator | ok: [testbed-manager]
2026-02-14 02:30:30.780817 | orchestrator |
2026-02-14 02:30:30.780844 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-02-14 02:30:31.156339 | orchestrator | changed: [testbed-manager]
2026-02-14 02:30:31.156423 | orchestrator |
2026-02-14 02:30:31.156432 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-02-14 02:30:31.204162 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:30:31.204242 | orchestrator |
2026-02-14 02:30:31.204253 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-02-14 02:30:31.282095 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-02-14 02:30:31.282243 | orchestrator |
2026-02-14 02:30:31.282260 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-02-14 02:30:31.339080 | orchestrator | ok: [testbed-manager]
2026-02-14 02:30:31.339181 | orchestrator |
2026-02-14 02:30:31.339197 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-02-14 02:30:33.521929 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-02-14 02:30:33.522087 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-02-14 02:30:33.522103 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-02-14 02:30:33.522112 | orchestrator |
2026-02-14 02:30:33.522121 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-02-14 02:30:34.334798 | orchestrator | changed: [testbed-manager]
2026-02-14 02:30:34.334911 | orchestrator |
2026-02-14 02:30:34.334928 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-02-14 02:30:35.127708 | orchestrator | changed: [testbed-manager]
2026-02-14 02:30:35.127780 | orchestrator |
2026-02-14 02:30:35.127787 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-02-14 02:30:35.906420 | orchestrator | changed: [testbed-manager]
2026-02-14 02:30:35.906515 | orchestrator |
2026-02-14 02:30:35.906529 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-02-14 02:30:35.990561 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-02-14 02:30:35.990656 | orchestrator |
2026-02-14 02:30:35.990674 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-02-14 02:30:36.036677 | orchestrator | ok: [testbed-manager]
2026-02-14 02:30:36.036797 | orchestrator |
2026-02-14 02:30:36.036821 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-02-14 02:30:36.810934 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-02-14 02:30:36.811059 | orchestrator |
2026-02-14 02:30:36.811089 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-02-14 02:30:36.898595 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-02-14 02:30:36.898696 | orchestrator |
2026-02-14 02:30:36.898712 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-02-14 02:30:37.724231 | orchestrator | changed: [testbed-manager]
2026-02-14 02:30:37.724387 | orchestrator |
2026-02-14 02:30:37.724405 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-02-14 02:30:38.375712 | orchestrator | ok: [testbed-manager]
2026-02-14 02:30:38.375818 | orchestrator |
2026-02-14 02:30:38.375835 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-02-14 02:30:38.435550 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:30:38.435654 | orchestrator |
2026-02-14 02:30:38.435670 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-02-14 02:30:38.508608 | orchestrator | ok: [testbed-manager]
2026-02-14 02:30:38.508722 | orchestrator |
2026-02-14 02:30:38.508745 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-02-14 02:30:39.380642 | orchestrator | changed: [testbed-manager]
2026-02-14 02:30:39.380736 | orchestrator |
2026-02-14 02:30:39.380750 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-02-14 02:31:57.778860 | orchestrator | changed: [testbed-manager]
2026-02-14 02:31:57.778969 | orchestrator |
2026-02-14 02:31:57.778978 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-02-14 02:31:58.880680 | orchestrator | ok: [testbed-manager]
2026-02-14 02:31:58.880790 | orchestrator |
2026-02-14 02:31:58.880814 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-02-14 02:31:58.941049 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:31:58.941155 | orchestrator |
2026-02-14 02:31:58.941172 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-02-14 02:32:01.646371 | orchestrator | changed: [testbed-manager]
2026-02-14 02:32:01.646449 | orchestrator |
2026-02-14 02:32:01.646456 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
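An aside on the handler that follows: "Wait for an healthy manager service" polls the service until it reports healthy, counting down from 50 retries (the `FAILED - RETRYING` lines below show three failed probes before success). A minimal sketch of that retry loop; `wait_for_healthy` and the stand-in probe are ours, not the role's actual task, and the real check would inspect the container's healthcheck status rather than a canned list:

```python
import time

def wait_for_healthy(probe, retries=50, delay=5.0):
    """Poll probe() until it returns 'healthy' or retries run out,
    mirroring the handler's bounded FAILED - RETRYING countdown."""
    for attempt in range(1, retries + 1):
        if probe() == "healthy":
            return attempt  # number of probes that were needed
        time.sleep(delay)
    raise TimeoutError("manager service never became healthy")

# Stand-in probe that fails three times before turning healthy,
# matching the 50 -> 48 retry countdown seen in this log.
states = iter(["starting", "starting", "starting", "healthy"])
attempts = wait_for_healthy(lambda: next(states), delay=0.0)
```

Bounding the wait (retries times delay) is what lets the play fail fast with a useful error instead of hanging indefinitely when a container never becomes healthy.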
2026-02-14 02:32:01.712443 | orchestrator | ok: [testbed-manager]
2026-02-14 02:32:01.712521 | orchestrator |
2026-02-14 02:32:01.712530 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-02-14 02:32:01.712536 | orchestrator |
2026-02-14 02:32:01.712542 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-02-14 02:32:01.921007 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:32:01.921108 | orchestrator |
2026-02-14 02:32:01.921122 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-02-14 02:33:01.982596 | orchestrator | Pausing for 60 seconds
2026-02-14 02:33:01.982701 | orchestrator | changed: [testbed-manager]
2026-02-14 02:33:01.982713 | orchestrator |
2026-02-14 02:33:01.982721 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-02-14 02:33:05.312909 | orchestrator | changed: [testbed-manager]
2026-02-14 02:33:05.313018 | orchestrator |
2026-02-14 02:33:05.313032 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-02-14 02:34:07.599836 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-02-14 02:34:07.599974 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-02-14 02:34:07.600033 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
2026-02-14 02:34:07.600057 | orchestrator | changed: [testbed-manager]
2026-02-14 02:34:07.600072 | orchestrator |
2026-02-14 02:34:07.600084 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-02-14 02:34:19.278322 | orchestrator | changed: [testbed-manager]
2026-02-14 02:34:19.278429 | orchestrator |
2026-02-14 02:34:19.278469 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-02-14 02:34:19.374912 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-02-14 02:34:19.374990 | orchestrator |
2026-02-14 02:34:19.375001 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-02-14 02:34:19.375009 | orchestrator |
2026-02-14 02:34:19.375016 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-02-14 02:34:19.437648 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:34:19.437779 | orchestrator |
2026-02-14 02:34:19.437812 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-02-14 02:34:19.523965 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-02-14 02:34:19.524060 | orchestrator |
2026-02-14 02:34:19.524075 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-02-14 02:34:20.440109 | orchestrator | changed: [testbed-manager]
2026-02-14 02:34:20.440202 | orchestrator |
2026-02-14 02:34:20.440213 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-02-14 02:34:24.008164 | orchestrator | ok: [testbed-manager]
2026-02-14 02:34:24.008305 | orchestrator |
2026-02-14 02:34:24.008335 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-02-14 02:34:24.096190 | orchestrator | ok: [testbed-manager] => {
2026-02-14 02:34:24.096266 | orchestrator | "version_check_result.stdout_lines": [
2026-02-14 02:34:24.096275 | orchestrator | "=== OSISM Container Version Check ===",
2026-02-14 02:34:24.096283 | orchestrator | "Checking running containers against expected versions...",
2026-02-14 02:34:24.096291 | orchestrator | "",
2026-02-14 02:34:24.096298 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-02-14 02:34:24.096305 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-02-14 02:34:24.096313 | orchestrator | " Enabled: true",
2026-02-14 02:34:24.096319 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-02-14 02:34:24.096326 | orchestrator | " Status: ✅ MATCH",
2026-02-14 02:34:24.096332 | orchestrator | "",
2026-02-14 02:34:24.096339 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-02-14 02:34:24.096366 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-02-14 02:34:24.096373 | orchestrator | " Enabled: true",
2026-02-14 02:34:24.096380 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-02-14 02:34:24.096386 | orchestrator | " Status: ✅ MATCH",
2026-02-14 02:34:24.096392 | orchestrator | "",
2026-02-14 02:34:24.096399 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-02-14 02:34:24.096405 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-02-14 02:34:24.096411 | orchestrator | " Enabled: true",
2026-02-14 02:34:24.096418 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-02-14 02:34:24.096424 | orchestrator | " Status: ✅ MATCH",
2026-02-14 02:34:24.096430 | orchestrator | "",
2026-02-14 02:34:24.096436 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-02-14 02:34:24.096443 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-02-14 02:34:24.096470 | orchestrator | " Enabled: true",
2026-02-14 02:34:24.096477 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-02-14 02:34:24.096483 | orchestrator | " Status: ✅ MATCH",
2026-02-14 02:34:24.096490 | orchestrator | "",
2026-02-14 02:34:24.096497 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-02-14 02:34:24.096504 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-02-14 02:34:24.096510 | orchestrator | " Enabled: true",
2026-02-14 02:34:24.096516 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-02-14 02:34:24.096522 | orchestrator | " Status: ✅ MATCH",
2026-02-14 02:34:24.096528 | orchestrator | "",
2026-02-14 02:34:24.096535 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-02-14 02:34:24.096541 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-14 02:34:24.096547 | orchestrator | " Enabled: true",
2026-02-14 02:34:24.096553 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-14 02:34:24.096560 | orchestrator | " Status: ✅ MATCH",
2026-02-14 02:34:24.096566 | orchestrator | "",
2026-02-14 02:34:24.096572 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-02-14 02:34:24.096578 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-02-14 02:34:24.096584 | orchestrator | " Enabled: true",
2026-02-14 02:34:24.096591 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-02-14 02:34:24.096598 | orchestrator | " Status: ✅ MATCH",
2026-02-14 02:34:24.096604 | orchestrator | "",
2026-02-14 02:34:24.096610 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-02-14 02:34:24.096616 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-02-14 02:34:24.096623 | orchestrator | " Enabled: true",
2026-02-14 02:34:24.096629 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-02-14 02:34:24.096635 | orchestrator | " Status: ✅ MATCH",
2026-02-14 02:34:24.096641 | orchestrator | "",
2026-02-14 02:34:24.096647 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-02-14 02:34:24.096654 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-02-14 02:34:24.096660 | orchestrator | " Enabled: true",
2026-02-14 02:34:24.096666 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-02-14 02:34:24.096673 | orchestrator | " Status: ✅ MATCH",
2026-02-14 02:34:24.096679 | orchestrator | "",
2026-02-14 02:34:24.096685 | orchestrator | "Checking service: redis (Redis Cache)",
2026-02-14 02:34:24.096691 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-02-14 02:34:24.096697 | orchestrator | " Enabled: true",
2026-02-14 02:34:24.096704 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-02-14 02:34:24.096710 | orchestrator | " Status: ✅ MATCH",
2026-02-14 02:34:24.096716 | orchestrator | "",
2026-02-14 02:34:24.096723 | orchestrator | "Checking service: api (OSISM API Service)",
2026-02-14 02:34:24.096734 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-14 02:34:24.096741 | orchestrator | " Enabled: true",
2026-02-14 02:34:24.096749 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-14 02:34:24.096756 | orchestrator | " Status: ✅ MATCH",
2026-02-14 02:34:24.096764 | orchestrator | "",
2026-02-14 02:34:24.096771 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-02-14 02:34:24.096778 |
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-14 02:34:24.096785 | orchestrator | " Enabled: true", 2026-02-14 02:34:24.096792 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-14 02:34:24.096799 | orchestrator | " Status: ✅ MATCH", 2026-02-14 02:34:24.096807 | orchestrator | "", 2026-02-14 02:34:24.096815 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-02-14 02:34:24.096822 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-14 02:34:24.096829 | orchestrator | " Enabled: true", 2026-02-14 02:34:24.096836 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-14 02:34:24.096844 | orchestrator | " Status: ✅ MATCH", 2026-02-14 02:34:24.096851 | orchestrator | "", 2026-02-14 02:34:24.096859 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-02-14 02:34:24.096866 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-14 02:34:24.096873 | orchestrator | " Enabled: true", 2026-02-14 02:34:24.096881 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-14 02:34:24.096901 | orchestrator | " Status: ✅ MATCH", 2026-02-14 02:34:24.096908 | orchestrator | "", 2026-02-14 02:34:24.096916 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-02-14 02:34:24.096923 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-14 02:34:24.096936 | orchestrator | " Enabled: true", 2026-02-14 02:34:24.096943 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-14 02:34:24.096950 | orchestrator | " Status: ✅ MATCH", 2026-02-14 02:34:24.096958 | orchestrator | "", 2026-02-14 02:34:24.096965 | orchestrator | "=== Summary ===", 2026-02-14 02:34:24.096973 | orchestrator | "Errors (version mismatches): 0", 2026-02-14 02:34:24.096981 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-02-14 02:34:24.096988 | orchestrator | "", 2026-02-14 02:34:24.096995 | orchestrator | "✅ All running containers match expected versions!" 2026-02-14 02:34:24.097003 | orchestrator | ] 2026-02-14 02:34:24.097011 | orchestrator | } 2026-02-14 02:34:24.097018 | orchestrator | 2026-02-14 02:34:24.097028 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-02-14 02:34:24.167316 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:34:24.167430 | orchestrator | 2026-02-14 02:34:24.167524 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 02:34:24.167563 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-02-14 02:34:24.167583 | orchestrator | 2026-02-14 02:34:24.310835 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-14 02:34:24.310939 | orchestrator | + deactivate 2026-02-14 02:34:24.310956 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-14 02:34:24.310969 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-14 02:34:24.310980 | orchestrator | + export PATH 2026-02-14 02:34:24.310992 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-14 02:34:24.311003 | orchestrator | + '[' -n '' ']' 2026-02-14 02:34:24.311014 | orchestrator | + hash -r 2026-02-14 02:34:24.311026 | orchestrator | + '[' -n '' ']' 2026-02-14 02:34:24.311036 | orchestrator | + unset VIRTUAL_ENV 2026-02-14 02:34:24.311047 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-14 02:34:24.311059 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-14 02:34:24.311070 | orchestrator | + unset -f deactivate 2026-02-14 02:34:24.311081 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-02-14 02:34:24.318171 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-14 02:34:24.318265 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-14 02:34:24.318309 | orchestrator | + local max_attempts=60 2026-02-14 02:34:24.318322 | orchestrator | + local name=ceph-ansible 2026-02-14 02:34:24.318333 | orchestrator | + local attempt_num=1 2026-02-14 02:34:24.318599 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-14 02:34:24.352050 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-14 02:34:24.352174 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-14 02:34:24.352201 | orchestrator | + local max_attempts=60 2026-02-14 02:34:24.352264 | orchestrator | + local name=kolla-ansible 2026-02-14 02:34:24.352284 | orchestrator | + local attempt_num=1 2026-02-14 02:34:24.352430 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-14 02:34:24.393433 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-14 02:34:24.393612 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-14 02:34:24.393629 | orchestrator | + local max_attempts=60 2026-02-14 02:34:24.393641 | orchestrator | + local name=osism-ansible 2026-02-14 02:34:24.393652 | orchestrator | + local attempt_num=1 2026-02-14 02:34:24.394140 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-14 02:34:24.432492 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-14 02:34:24.432575 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-14 02:34:24.432595 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-14 02:34:25.218830 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-02-14 02:34:25.425301 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-02-14 02:34:25.425387 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-02-14 02:34:25.425399 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-02-14 02:34:25.425408 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-02-14 02:34:25.425419 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-02-14 02:34:25.425446 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-02-14 02:34:25.425494 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-02-14 02:34:25.425503 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-02-14 02:34:25.425511 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-02-14 02:34:25.425519 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-02-14 02:34:25.425527 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 
2026-02-14 02:34:25.425535 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-02-14 02:34:25.425544 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-02-14 02:34:25.425572 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-02-14 02:34:25.425581 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-02-14 02:34:25.425590 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-02-14 02:34:25.431987 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-14 02:34:25.497818 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-14 02:34:25.497902 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-02-14 02:34:25.501722 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-02-14 02:34:38.008929 | orchestrator | 2026-02-14 02:34:38 | INFO  | Task 1159791c-2004-4916-bfd6-e1f3e138d6df (resolvconf) was prepared for execution. 2026-02-14 02:34:38.009017 | orchestrator | 2026-02-14 02:34:38 | INFO  | It takes a moment until task 1159791c-2004-4916-bfd6-e1f3e138d6df (resolvconf) has been started and output is visible here. 
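The `set -x` trace above shows `wait_for_container_healthy 60 <name>` polling `docker inspect -f '{{.State.Health.Status}}'` until each of the ceph-ansible, kolla-ansible, and osism-ansible containers reports `healthy`. A minimal sketch of such a helper, with the probe command injectable so it runs without Docker (the real script presumably hard-codes the `docker inspect` call; the function name, retry cadence, and messages here are assumptions, not the testbed script itself):

```shell
#!/bin/sh
# Hypothetical re-implementation of the wait_for_container_healthy helper
# traced in the log: poll a health-status command until it prints "healthy"
# or the attempt budget is exhausted.
wait_for_healthy() {
    max_attempts=$1
    probe=$2                 # command that prints the current health status;
                             # stands in for: docker inspect -f '{{.State.Health.Status}}' NAME
    attempt_num=1
    while [ "$($probe)" != "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "gave up after $attempt_num attempts"
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 1              # back off briefly between probes
    done
    echo "healthy after $attempt_num attempt(s)"
}

# Stub probe that is immediately healthy, mirroring the log where each
# container already reported "healthy" on the first inspect.
wait_for_healthy 60 "echo healthy"
```

In the trace each loop exits on the first probe (`[[ healthy == healthy ]]`), so the `sleep`/retry path is never exercised in this run.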
2026-02-14 02:34:53.942158 | orchestrator |
2026-02-14 02:34:53.942291 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2026-02-14 02:34:53.942320 | orchestrator |
2026-02-14 02:34:53.942341 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-14 02:34:53.942361 | orchestrator | Saturday 14 February 2026 02:34:42 +0000 (0:00:00.158) 0:00:00.158 *****
2026-02-14 02:34:53.942378 | orchestrator | ok: [testbed-manager]
2026-02-14 02:34:53.942396 | orchestrator |
2026-02-14 02:34:53.942413 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-02-14 02:34:53.942433 | orchestrator | Saturday 14 February 2026 02:34:46 +0000 (0:00:04.285) 0:00:04.444 *****
2026-02-14 02:34:53.942452 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:34:53.942468 | orchestrator |
2026-02-14 02:34:53.942507 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-02-14 02:34:53.942520 | orchestrator | Saturday 14 February 2026 02:34:46 +0000 (0:00:00.076) 0:00:04.520 *****
2026-02-14 02:34:53.942531 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2026-02-14 02:34:53.942544 | orchestrator |
2026-02-14 02:34:53.942555 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-02-14 02:34:53.942566 | orchestrator | Saturday 14 February 2026 02:34:47 +0000 (0:00:00.096) 0:00:04.617 *****
2026-02-14 02:34:53.942597 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2026-02-14 02:34:53.942609 | orchestrator |
2026-02-14 02:34:53.942621 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-02-14 02:34:53.942633 | orchestrator | Saturday 14 February 2026 02:34:47 +0000 (0:00:00.115) 0:00:04.733 *****
2026-02-14 02:34:53.942645 | orchestrator | ok: [testbed-manager]
2026-02-14 02:34:53.942658 | orchestrator |
2026-02-14 02:34:53.942670 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-02-14 02:34:53.942683 | orchestrator | Saturday 14 February 2026 02:34:48 +0000 (0:00:01.216) 0:00:05.949 *****
2026-02-14 02:34:53.942695 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:34:53.942707 | orchestrator |
2026-02-14 02:34:53.942719 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-02-14 02:34:53.942732 | orchestrator | Saturday 14 February 2026 02:34:48 +0000 (0:00:00.065) 0:00:06.014 *****
2026-02-14 02:34:53.942776 | orchestrator | ok: [testbed-manager]
2026-02-14 02:34:53.942796 | orchestrator |
2026-02-14 02:34:53.942816 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-02-14 02:34:53.942834 | orchestrator | Saturday 14 February 2026 02:34:48 +0000 (0:00:00.546) 0:00:06.561 *****
2026-02-14 02:34:53.942851 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:34:53.942922 | orchestrator |
2026-02-14 02:34:53.942935 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-02-14 02:34:53.942949 | orchestrator | Saturday 14 February 2026 02:34:49 +0000 (0:00:00.094) 0:00:06.656 *****
2026-02-14 02:34:53.942960 | orchestrator | changed: [testbed-manager]
2026-02-14 02:34:53.942971 | orchestrator |
2026-02-14 02:34:53.942987 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-02-14 02:34:53.943004 | orchestrator | Saturday 14 February 2026 02:34:49 +0000 (0:00:00.609) 0:00:07.265 *****
2026-02-14 02:34:53.943023 | orchestrator | changed: [testbed-manager]
2026-02-14 02:34:53.943041 | orchestrator |
2026-02-14 02:34:53.943061 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-02-14 02:34:53.943080 | orchestrator | Saturday 14 February 2026 02:34:51 +0000 (0:00:01.365) 0:00:08.631 *****
2026-02-14 02:34:53.943095 | orchestrator | ok: [testbed-manager]
2026-02-14 02:34:53.943107 | orchestrator |
2026-02-14 02:34:53.943117 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-02-14 02:34:53.943128 | orchestrator | Saturday 14 February 2026 02:34:52 +0000 (0:00:01.134) 0:00:09.765 *****
2026-02-14 02:34:53.943139 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2026-02-14 02:34:53.943150 | orchestrator |
2026-02-14 02:34:53.943161 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-02-14 02:34:53.943172 | orchestrator | Saturday 14 February 2026 02:34:52 +0000 (0:00:00.101) 0:00:09.867 *****
2026-02-14 02:34:53.943182 | orchestrator | changed: [testbed-manager]
2026-02-14 02:34:53.943193 | orchestrator |
2026-02-14 02:34:53.943204 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 02:34:53.943216 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-14 02:34:53.943227 | orchestrator |
2026-02-14 02:34:53.943238 | orchestrator |
2026-02-14 02:34:53.943248 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 02:34:53.943259 | orchestrator | Saturday 14 February 2026 02:34:53 +0000 (0:00:01.352) 0:00:11.219 *****
2026-02-14 02:34:53.943270 | orchestrator | ===============================================================================
2026-02-14 02:34:53.943280 | orchestrator | Gathering Facts --------------------------------------------------------- 4.29s
2026-02-14 02:34:53.943291 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.37s
2026-02-14 02:34:53.943302 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.35s
2026-02-14 02:34:53.943312 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.22s
2026-02-14 02:34:53.943323 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.13s
2026-02-14 02:34:53.943334 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.61s
2026-02-14 02:34:53.943366 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.55s
2026-02-14 02:34:53.943378 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.12s
2026-02-14 02:34:53.943388 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.10s
2026-02-14 02:34:53.943399 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.10s
2026-02-14 02:34:53.943409 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s
2026-02-14 02:34:53.943420 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.08s
2026-02-14 02:34:53.943443 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s
2026-02-14 02:34:54.343979 | orchestrator | + osism apply sshconfig
2026-02-14 02:35:06.511871 | orchestrator | 2026-02-14 02:35:06 | INFO  | Task 30edeea1-d7a8-4ace-8246-67b91c145f57 (sshconfig) was prepared for execution.
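The sshconfig play that follows writes one config fragment per testbed host ("Ensure config for each host exist") and then concatenates them ("Assemble ssh config"). A minimal sketch of that fragment-and-assemble pattern, under stated assumptions: the fragment contents, the `config.d` layout, and the operator user `dragon` are illustrative guesses from the task names and the `/home/dragon` path in the trace, not the role's actual templates:

```shell
#!/bin/sh
# Sketch of the write-fragments / assemble flow implied by the sshconfig
# role's task names. Fragment content is a hypothetical Host stanza.
write_fragments() {            # $1 = config.d directory, remaining args = hosts
    dir=$1; shift
    mkdir -p "$dir"
    for host in "$@"; do
        # one fragment file per host, named after the host
        printf 'Host %s\n    User dragon\n\n' "$host" > "$dir/$host"
    done
}

assemble_config() {            # $1 = config.d directory, $2 = assembled output
    cat "$1"/* > "$2"          # corresponds to the "Assemble ssh config" task
}

d=$(mktemp -d)
write_fragments "$d/config.d" testbed-manager testbed-node-0 testbed-node-1
assemble_config "$d/config.d" "$d/config"
grep -c '^Host ' "$d/config"   # one Host stanza per fragment
```

Keeping per-host fragments in a directory and assembling them into a single file mirrors Ansible's `assemble` module, which is presumably what lets the role regenerate any one host's stanza without rewriting the whole ssh config.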
2026-02-14 02:35:06.511990 | orchestrator | 2026-02-14 02:35:06 | INFO  | It takes a moment until task 30edeea1-d7a8-4ace-8246-67b91c145f57 (sshconfig) has been started and output is visible here.
2026-02-14 02:35:19.692479 | orchestrator |
2026-02-14 02:35:19.692698 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2026-02-14 02:35:19.692726 | orchestrator |
2026-02-14 02:35:19.692772 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2026-02-14 02:35:19.692793 | orchestrator | Saturday 14 February 2026 02:35:11 +0000 (0:00:00.167) 0:00:00.167 *****
2026-02-14 02:35:19.692812 | orchestrator | ok: [testbed-manager]
2026-02-14 02:35:19.692833 | orchestrator |
2026-02-14 02:35:19.692848 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2026-02-14 02:35:19.692859 | orchestrator | Saturday 14 February 2026 02:35:12 +0000 (0:00:00.565) 0:00:00.732 *****
2026-02-14 02:35:19.692870 | orchestrator | changed: [testbed-manager]
2026-02-14 02:35:19.692882 | orchestrator |
2026-02-14 02:35:19.692893 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2026-02-14 02:35:19.692904 | orchestrator | Saturday 14 February 2026 02:35:12 +0000 (0:00:00.546) 0:00:01.278 *****
2026-02-14 02:35:19.692915 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2026-02-14 02:35:19.692926 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2026-02-14 02:35:19.692937 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2026-02-14 02:35:19.692947 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2026-02-14 02:35:19.692958 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2026-02-14 02:35:19.692969 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2026-02-14 02:35:19.692980 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2026-02-14 02:35:19.693094 | orchestrator |
2026-02-14 02:35:19.693115 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2026-02-14 02:35:19.693137 | orchestrator | Saturday 14 February 2026 02:35:18 +0000 (0:00:06.111) 0:00:07.390 *****
2026-02-14 02:35:19.693158 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:35:19.693178 | orchestrator |
2026-02-14 02:35:19.693198 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2026-02-14 02:35:19.693219 | orchestrator | Saturday 14 February 2026 02:35:18 +0000 (0:00:00.079) 0:00:07.470 *****
2026-02-14 02:35:19.693240 | orchestrator | changed: [testbed-manager]
2026-02-14 02:35:19.693259 | orchestrator |
2026-02-14 02:35:19.693281 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 02:35:19.693302 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-14 02:35:19.693323 | orchestrator |
2026-02-14 02:35:19.693343 | orchestrator |
2026-02-14 02:35:19.693362 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 02:35:19.693381 | orchestrator | Saturday 14 February 2026 02:35:19 +0000 (0:00:00.597) 0:00:08.067 *****
2026-02-14 02:35:19.693400 | orchestrator | ===============================================================================
2026-02-14 02:35:19.693420 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 6.11s
2026-02-14 02:35:19.693439 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.60s
2026-02-14 02:35:19.693453 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.57s
2026-02-14 02:35:19.693464 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.55s
2026-02-14 02:35:19.693546 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s
2026-02-14 02:35:20.059195 | orchestrator | + osism apply known-hosts
2026-02-14 02:35:32.181200 | orchestrator | 2026-02-14 02:35:32 | INFO  | Task b7029ed6-6038-4c98-a246-3972c5cd1a0d (known-hosts) was prepared for execution.
2026-02-14 02:35:32.181319 | orchestrator | 2026-02-14 02:35:32 | INFO  | It takes a moment until task b7029ed6-6038-4c98-a246-3972c5cd1a0d (known-hosts) has been started and output is visible here.
2026-02-14 02:35:50.176828 | orchestrator |
2026-02-14 02:35:50.176943 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2026-02-14 02:35:50.176959 | orchestrator |
2026-02-14 02:35:50.176972 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2026-02-14 02:35:50.176984 | orchestrator | Saturday 14 February 2026 02:35:36 +0000 (0:00:00.166) 0:00:00.166 *****
2026-02-14 02:35:50.176996 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-02-14 02:35:50.177008 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-02-14 02:35:50.177019 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-02-14 02:35:50.177030 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-02-14 02:35:50.177040 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-02-14 02:35:50.177051 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-02-14 02:35:50.177062 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-02-14 02:35:50.177073 | orchestrator |
2026-02-14 02:35:50.177084 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2026-02-14 02:35:50.177096 | orchestrator | Saturday 14 February 2026 02:35:42 +0000 (0:00:06.296) 0:00:06.463 *****
2026-02-14 02:35:50.177108 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-02-14 02:35:50.177121 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-02-14 02:35:50.177132 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-02-14 02:35:50.177143 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-02-14 02:35:50.177154 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-02-14 02:35:50.177176 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-02-14 02:35:50.177187 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-02-14 02:35:50.177198 | orchestrator |
2026-02-14 02:35:50.177209 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-14 02:35:50.177220 | orchestrator | Saturday 14 February 2026 02:35:43 +0000 (0:00:00.204) 0:00:06.668 *****
2026-02-14 02:35:50.177239 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCujybwLvKv2qezfvPQIelKGtoxZV5x5b5MNKMw3QitfoM0SnD0LruROOKc8r3RG//B4KS2N34PqBjuCkSqSLLPL2s3k72D1P/ZJZ7NHJhnzAOWom7uj7Lg3MpCvt9+XaN4+9VzttkFS4TblP0qSALqCVI2pzyU7KjhGFw3Kt4iRdmOj2wdzvUUTb10fbCcRaxIzm80NS6EpRDPcyhad7qrYNUtvdopQUcxY09wifLoS/YemsNM7NDpZleVkEgWc1bbgEBSFu+j6dRUO0KfOsoVyf8DD1BaDzazfVN+TSjQb2tMeLxmTN3lGsxithpeqkXhMZV+BUSrDCYEwLmZtr+1ImvR0YDspQEEJ4SZphDM9pGrT5a7BnnMlQB4NcC93G2fpJfrkWCR6IWxn099WT7erK3mhnM9K8t6p6fJpVYXWceYDjj6Tjmn5+AZFQDRAui6DgdTAtpGHEf24yIuiavVsy/nb8fJ3RP3DK49ubtr83KhF+9O+ZmvUA4aeRSSm5E=)
2026-02-14 02:35:50.177275 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAeKmQZ/aAPBosEQ8CecME5t8V3wc6gmkaSg80CvK56qzRO3H7M7n7iN15DGm2Jo9Dtj9o1CIjIXzFX1AQl6ZV0=)
2026-02-14 02:35:50.177288 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA08eglV3mSNWznpeP1DmeQduvqhYqlSYE2wE8mV+OhT)
2026-02-14 02:35:50.177301 | orchestrator |
2026-02-14 02:35:50.177312 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-14 02:35:50.177323 | orchestrator | Saturday 14 February 2026 02:35:44 +0000 (0:00:01.258) 0:00:07.926 *****
2026-02-14 02:35:50.177352 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCK96ADF0LvWZSkUkBkAG950xmzaC0H0on8IY7Mz/ofGV3F4wSVSRJZFkf1KiRymsaBRHnJHiqjX74R5Vnc9AOUZwW961M0USf466ih5/S0csCmUdMLyLQIDsAvCkxaMg0lJ1uOR0WBDvwBwq4CxiwVX6vIgM7NzHBcx7hUIe/oWbJaAolsq9o7TEqiXMJork4FkoQU+N1mT1Cd0Ylg3OhQH7PuPtDmuZexoXNwk0RtRHSfWSC/SCd2zmWJ16no/9T6Rz+XYbrhmz0NSWwAMjZMIvtgjB3DQdwrs500gblPaPglran00TjvszZEQNQtCEox9Z1alebY6DgMA89uiNULWMOuS0U+MsqVYLJOGJ0BC7PWipYvwnu1SStSCnJrnynvKlE4C/aAUxO097ycF3p+MWht2ZFZUwFP5ObdAQgQtxLoLMsFlQDUDeh495WqhiLgQDxZkuoDvUXmUfsLEfT0gYfKja5l9HYxYGz6X91Pl+mfASd4il4d8/pGvueC1a8=)
2026-02-14 02:35:50.177366 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBopv4W2Bm9Tk5PU9B0lVIWmAQUxFz1Rjx85WvWn75OwOqKABBxLhorKYLPuEeCkXVYOkXp9aetVV12+h1dy8K0=)
2026-02-14 02:35:50.177379 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ9YDdd2jQ03lGZI0vNLLMd6c0gRWA+mljogLl9dzX9j)
2026-02-14 02:35:50.177392 | orchestrator |
2026-02-14 02:35:50.177404 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-14 02:35:50.177416 | orchestrator | Saturday 14 February 2026 02:35:45 +0000 (0:00:01.162) 0:00:09.088 *****
2026-02-14 02:35:50.177429 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCLm6xt/zC/mhgCgb5s+T47M0uH7SVrUVAZitx834Mt2pYWGk2JE6XwJRR/ndkhggx0WJRZCAKxqkgS5yCWUmVtJWnxEZSOP9AW1XK8rh1MW0CqKnoiegHBoinpbG0+WDA4WC6X/L4j3vtJDF9hBYpQ1k30FpHNFvzlGwbut0NrcCdVvhIIQu/WiT2ieZJ1REH9N9k4y6oiswH562U3n4Zt8c34Tx2ljehq2xAvO68+GeBqrrj5+yymz8VsdQ7WSelube8bVuSMQaOQteN2q3xmmfR3LVfM4sVq3fkUGZ/oi8Gzleo5N4xe+X1emAbydFnyhvo1ok2HHuHn/du+qBX2xl0aJjX/avcdBZWIIujqwPW+fytJV6TiUNmn+z6jPRrPrvBIGT/lBjzkMUFagL8ChVC6mm5oX43FzKHfZet962uwpR59Jv6CCXesgrv1ZCIqeTaqtVCqEz/y7sqO8wqHVMUinCGm2oO8gCn/EqnWz/422CMC1xsL82ZQLhLZEr8=)
2026-02-14 02:35:50.177441 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLq6AP7Xbfii4ZYtdDD4kbXKhovwKToV55kGnvHoOr//GP2DoVctFpi4Hcmfg6JCGZv+8Mr9C9YcQ87flLbbyl0=)
2026-02-14 02:35:50.177454 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOg69bbBZ4TuP/uMZXIEjQxt6jcN9EiewHPTP98QvFGx)
2026-02-14 02:35:50.177466 | orchestrator |
2026-02-14 02:35:50.177479 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-14 02:35:50.177491 | orchestrator | Saturday 14 February 2026 02:35:46 +0000 (0:00:01.147) 0:00:10.236 *****
2026-02-14 02:35:50.177504 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDNHFG/WvhVUoIiZhqVuPsggXmAhI3qzBk8OnLsGYbD1MiXfbKS26quCM9bPj/oFZbmx+6I8IHYE1a26BicjMWkHY/exFEEhgOs5cqJWvQoje1MTtjvVCiDSPf15pMpceqV9k4s2szdG6Yhw8D9M4dudT7X3StMLYIk7oVguNxUKj/JFwCTBJo0h6H4JVD67ZAufOZEGjZusht++BQrFaTYIjeE1bbFKNcYVF/VsNNQGYkvW1XD5DHAmyZN8oD10nbRrWwNuT741PpFwH0YH3XFjrIadi+1KXL+akPyhvKtzgZX+5hHUIsfWFdxkeoJ85HDQMQlhOjg5gIQK17lvuOkBlSka1EVFIFj4y0E3bW9eHpGwj+e+4ArWR9I4MrF/hLM9pGDHf4uavSrZcRsuRE/6ZBre0CpsbcBwVV2wDXdj4umTWExZUhi+sQrlZDOrSVBXwjTcGqp7cHLpOJu26GwnCltnE/j7obx0XZUhYVBjKumcTUTchzr0Ih/Bq+iW8U=)
2026-02-14 02:35:50.177525 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHLKG3+CIzhVQnJcYbZJogoPEjgbwXtkc6703M7TBxx7b75sJ2P7bt4Wf0SrvxW0qDF08LfsYaBmw8D0KE2KWw4=)
2026-02-14 02:35:50.177538 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOnVmr0cxQ9c5p5XrwigaNEEfU7onPxMtZVtQUyG/amf)
2026-02-14 02:35:50.177550 | orchestrator |
2026-02-14 02:35:50.177592 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-14 02:35:50.177605 | orchestrator | Saturday 14 February 2026 02:35:47 +0000 (0:00:01.166) 0:00:11.403 *****
2026-02-14 02:35:50.177704 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRuhTfHOcZA46ZSinrJCX+9WjVrqZOj1OZ5TbKGEux/ECZbM6G6+MQJ2+N1DuVnjtb2Ci4WnwVQkW/t6p3ByYSVZblXOIHoikcrayiKlV8WsaozuzCIdmgJuG8NjktB6hTWG0vwRce2wI5ZWCjFXThS+eCZuFtus0oCxyJGxV2ZbKYOuB/WEOkwLkwTeKz9/5iCFI06+NpI4pH9XU1nQAm0CvXV7XCvnwOUXr/FKyaB2qjky5xV36JVJPT1phrukvCbfAd6SoFPoTDHKsWjoe9SXvIBNSCdWclPJZhcqphNCF3DslTVIXq4jV0GsQMJ6/s9y/tOnWsdNeKp2FfaAEfJ1+Z9Jdu0DgOGPlPZrzpwUo4tlWhNFWKxOTQM8nhhruSYG+gq86es0gwumQL+qTWLvojSaI8sBmMgqw5+Ut2obwIyes8aaxhsQ1bMQ4NoeO0Ci/VvzJGtRRcesnwUVdYaLtVEu2abRccFLQceK0q1fCKwmV0ffrw+htbIfA3exs=)
2026-02-14 02:35:50.177726 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDKwCJ1K1bwZCtVx0TDFCW7YBuEqwhmhtODQ5xKccvEsLtJzuzR8nwNB/vumQKF6OsQ8fjrMVJvToEEsCBNDEWo=)
2026-02-14 02:35:50.177744 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJZRTTCMFnlId9WzXQB8nTPM3z01ka3tjz27Yc55VHha)
2026-02-14 02:35:50.177781 | orchestrator |
2026-02-14 02:35:50.177815 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-14 02:35:50.177831 | orchestrator | Saturday 14 February 2026 02:35:49 +0000 (0:00:01.135) 0:00:12.539 *****
2026-02-14 02:35:50.177863 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDiiXhqwOGfoSCFKxIPHmKWVfyFKaKfOZ7LZTqncXpVNc6/idn2VpT4uD8D2kiKtqw3apHkef2K6A9nemhCWa0BDtQN8qx3zw5TDJ0szqfcbysfxebFQl20Pt0J3S75lSEk2hesET5fQk69H9GGiVJp1jMrI3opZBgaHSr4vPR340zRYTcH+6x8Xmm7UukU8ShbdtJB9LRTFwXFiBQ4YBVBdCUc+vw4BzZT4AF7fftjYobxNOcTcYx1L318/Fgxh91JKTgjGEN1Rqo5A3KVI/UkQJ9vf5yM0DoIC3rEXS58/uCtxqIlx/78N8IBRmCsBiLAZGhqywFAShZeYcAtuilDCUW8Y4kShl0wFhajl3yR7fdPtlKNt/DpToyWf8m92XWDrxMzhx7YQxpGeZIerxzmUjcBs0iJorZK+KfFgycfkDQ/bLfvyQwChUoNsHHpRCfLgL3VhtJbJS1dq6RbaVGz/4e5xM3MT4+5vopXufxKSPlByfEc3rx9bogjX5e3OC0=)
2026-02-14 02:36:03.018151 | orchestrator | changed: [testbed-manager] =>
(item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGoR9E25da5oK345M5OrZ81CliDoL6SlYNWskzdyPxgZvAZsHOi1oS+ouwxtA6lAlX7yUiOV1LO8QEYwrNb/T1E=) 2026-02-14 02:36:03.018264 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFzylB9WT3g/QLiZGCh/7FOa4uIqkOx8fd6EYAaFMVZO) 2026-02-14 02:36:03.018277 | orchestrator | 2026-02-14 02:36:03.018285 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-14 02:36:03.018293 | orchestrator | Saturday 14 February 2026 02:35:50 +0000 (0:00:01.152) 0:00:13.691 ***** 2026-02-14 02:36:03.018302 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwIsSIMsO9peSbDgNvmlTdreBbn+QOp5lA3gJnIgiTc1TI0jUyKLg8vkzhwywr2Yc19ysQ7BoPn60hbzcJWKPuAF0o+GMJR8RIUnA2VXr72q2Ke1Xws2NGJAvaNaVEhiEmf4lyurFw6vtNIWSA5h6KaEDVDmBTf4MbZlZXhqy/Vidcw7aB4t0tzQpR2Hk7gQQjsPqDXDKdMet0MZoDxJlGUc8pUEzOGbMgK5eQIvGNLEiT5BQJiKxGRS7oTf+ltQe30NeP2HEX5Jci8Sc6aLhFmuR9iE/HgFvmzPEwfM+UVLZcLuw0YtG+UUIZn0B43dZDzRbfaRGDYNRyFjzT1apFvnMPYmw+5g3XWRMIUw7PsWTdkyBjK+MpdTgQt2bHZR3mklYDFnpxbJBycXclGlg4k+/SjNxkAamW5HkAipGMcyDb7cm7UxdeX44YxjUO1vcO9p13Pvpks9djuxfY9XyG0cngBnIpniY1LZiCAwrVWWfstGd4D5Jxu5YNmG15W70=) 2026-02-14 02:36:03.018311 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBSx7u8OAsfGqhXLANjGKxX+Utr0KdkpfXygTmXZ92uXyesJt2mYeF6Y7oi3bggcmRYpGRAigmxb6VDL72jRlnE=) 2026-02-14 02:36:03.018337 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHoB69e5jWXk0mMr1xQAMPsqwFPWwTjsfBd6IXBoaUQi) 2026-02-14 02:36:03.018344 | orchestrator | 2026-02-14 02:36:03.018350 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-02-14 02:36:03.018358 | orchestrator | Saturday 14 February 2026 02:35:51 +0000 
(0:00:01.141) 0:00:14.832 ***** 2026-02-14 02:36:03.018365 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-14 02:36:03.018371 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-14 02:36:03.018377 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-14 02:36:03.018383 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-14 02:36:03.018388 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-14 02:36:03.018394 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-14 02:36:03.018400 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-14 02:36:03.018405 | orchestrator | 2026-02-14 02:36:03.018411 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-02-14 02:36:03.018419 | orchestrator | Saturday 14 February 2026 02:35:56 +0000 (0:00:05.640) 0:00:20.472 ***** 2026-02-14 02:36:03.018426 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-14 02:36:03.018435 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-14 02:36:03.018440 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-02-14 02:36:03.018446 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-14 02:36:03.018452 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-14 02:36:03.018458 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-14 02:36:03.018464 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-14 02:36:03.018471 | orchestrator | 2026-02-14 02:36:03.018477 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-14 02:36:03.018483 | orchestrator | Saturday 14 February 2026 02:35:57 +0000 (0:00:00.237) 0:00:20.710 ***** 2026-02-14 02:36:03.018507 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCujybwLvKv2qezfvPQIelKGtoxZV5x5b5MNKMw3QitfoM0SnD0LruROOKc8r3RG//B4KS2N34PqBjuCkSqSLLPL2s3k72D1P/ZJZ7NHJhnzAOWom7uj7Lg3MpCvt9+XaN4+9VzttkFS4TblP0qSALqCVI2pzyU7KjhGFw3Kt4iRdmOj2wdzvUUTb10fbCcRaxIzm80NS6EpRDPcyhad7qrYNUtvdopQUcxY09wifLoS/YemsNM7NDpZleVkEgWc1bbgEBSFu+j6dRUO0KfOsoVyf8DD1BaDzazfVN+TSjQb2tMeLxmTN3lGsxithpeqkXhMZV+BUSrDCYEwLmZtr+1ImvR0YDspQEEJ4SZphDM9pGrT5a7BnnMlQB4NcC93G2fpJfrkWCR6IWxn099WT7erK3mhnM9K8t6p6fJpVYXWceYDjj6Tjmn5+AZFQDRAui6DgdTAtpGHEf24yIuiavVsy/nb8fJ3RP3DK49ubtr83KhF+9O+ZmvUA4aeRSSm5E=) 2026-02-14 02:36:03.018515 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAeKmQZ/aAPBosEQ8CecME5t8V3wc6gmkaSg80CvK56qzRO3H7M7n7iN15DGm2Jo9Dtj9o1CIjIXzFX1AQl6ZV0=) 2026-02-14 02:36:03.018534 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA08eglV3mSNWznpeP1DmeQduvqhYqlSYE2wE8mV+OhT) 2026-02-14 
02:36:03.018541 | orchestrator | 2026-02-14 02:36:03.018547 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-14 02:36:03.018554 | orchestrator | Saturday 14 February 2026 02:35:58 +0000 (0:00:01.215) 0:00:21.926 ***** 2026-02-14 02:36:03.018560 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBopv4W2Bm9Tk5PU9B0lVIWmAQUxFz1Rjx85WvWn75OwOqKABBxLhorKYLPuEeCkXVYOkXp9aetVV12+h1dy8K0=) 2026-02-14 02:36:03.018567 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCK96ADF0LvWZSkUkBkAG950xmzaC0H0on8IY7Mz/ofGV3F4wSVSRJZFkf1KiRymsaBRHnJHiqjX74R5Vnc9AOUZwW961M0USf466ih5/S0csCmUdMLyLQIDsAvCkxaMg0lJ1uOR0WBDvwBwq4CxiwVX6vIgM7NzHBcx7hUIe/oWbJaAolsq9o7TEqiXMJork4FkoQU+N1mT1Cd0Ylg3OhQH7PuPtDmuZexoXNwk0RtRHSfWSC/SCd2zmWJ16no/9T6Rz+XYbrhmz0NSWwAMjZMIvtgjB3DQdwrs500gblPaPglran00TjvszZEQNQtCEox9Z1alebY6DgMA89uiNULWMOuS0U+MsqVYLJOGJ0BC7PWipYvwnu1SStSCnJrnynvKlE4C/aAUxO097ycF3p+MWht2ZFZUwFP5ObdAQgQtxLoLMsFlQDUDeh495WqhiLgQDxZkuoDvUXmUfsLEfT0gYfKja5l9HYxYGz6X91Pl+mfASd4il4d8/pGvueC1a8=) 2026-02-14 02:36:03.018574 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ9YDdd2jQ03lGZI0vNLLMd6c0gRWA+mljogLl9dzX9j) 2026-02-14 02:36:03.018580 | orchestrator | 2026-02-14 02:36:03.018612 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-14 02:36:03.018618 | orchestrator | Saturday 14 February 2026 02:35:59 +0000 (0:00:01.186) 0:00:23.113 ***** 2026-02-14 02:36:03.018624 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOg69bbBZ4TuP/uMZXIEjQxt6jcN9EiewHPTP98QvFGx) 2026-02-14 02:36:03.018630 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCLm6xt/zC/mhgCgb5s+T47M0uH7SVrUVAZitx834Mt2pYWGk2JE6XwJRR/ndkhggx0WJRZCAKxqkgS5yCWUmVtJWnxEZSOP9AW1XK8rh1MW0CqKnoiegHBoinpbG0+WDA4WC6X/L4j3vtJDF9hBYpQ1k30FpHNFvzlGwbut0NrcCdVvhIIQu/WiT2ieZJ1REH9N9k4y6oiswH562U3n4Zt8c34Tx2ljehq2xAvO68+GeBqrrj5+yymz8VsdQ7WSelube8bVuSMQaOQteN2q3xmmfR3LVfM4sVq3fkUGZ/oi8Gzleo5N4xe+X1emAbydFnyhvo1ok2HHuHn/du+qBX2xl0aJjX/avcdBZWIIujqwPW+fytJV6TiUNmn+z6jPRrPrvBIGT/lBjzkMUFagL8ChVC6mm5oX43FzKHfZet962uwpR59Jv6CCXesgrv1ZCIqeTaqtVCqEz/y7sqO8wqHVMUinCGm2oO8gCn/EqnWz/422CMC1xsL82ZQLhLZEr8=) 2026-02-14 02:36:03.018645 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLq6AP7Xbfii4ZYtdDD4kbXKhovwKToV55kGnvHoOr//GP2DoVctFpi4Hcmfg6JCGZv+8Mr9C9YcQ87flLbbyl0=) 2026-02-14 02:36:03.018652 | orchestrator | 2026-02-14 02:36:03.018658 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-14 02:36:03.018665 | orchestrator | Saturday 14 February 2026 02:36:00 +0000 (0:00:01.201) 0:00:24.315 ***** 2026-02-14 02:36:03.018672 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDNHFG/WvhVUoIiZhqVuPsggXmAhI3qzBk8OnLsGYbD1MiXfbKS26quCM9bPj/oFZbmx+6I8IHYE1a26BicjMWkHY/exFEEhgOs5cqJWvQoje1MTtjvVCiDSPf15pMpceqV9k4s2szdG6Yhw8D9M4dudT7X3StMLYIk7oVguNxUKj/JFwCTBJo0h6H4JVD67ZAufOZEGjZusht++BQrFaTYIjeE1bbFKNcYVF/VsNNQGYkvW1XD5DHAmyZN8oD10nbRrWwNuT741PpFwH0YH3XFjrIadi+1KXL+akPyhvKtzgZX+5hHUIsfWFdxkeoJ85HDQMQlhOjg5gIQK17lvuOkBlSka1EVFIFj4y0E3bW9eHpGwj+e+4ArWR9I4MrF/hLM9pGDHf4uavSrZcRsuRE/6ZBre0CpsbcBwVV2wDXdj4umTWExZUhi+sQrlZDOrSVBXwjTcGqp7cHLpOJu26GwnCltnE/j7obx0XZUhYVBjKumcTUTchzr0Ih/Bq+iW8U=) 2026-02-14 02:36:03.018682 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHLKG3+CIzhVQnJcYbZJogoPEjgbwXtkc6703M7TBxx7b75sJ2P7bt4Wf0SrvxW0qDF08LfsYaBmw8D0KE2KWw4=) 
2026-02-14 02:36:03.018693 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOnVmr0cxQ9c5p5XrwigaNEEfU7onPxMtZVtQUyG/amf) 2026-02-14 02:36:08.179250 | orchestrator | 2026-02-14 02:36:08.179370 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-14 02:36:08.179391 | orchestrator | Saturday 14 February 2026 02:36:03 +0000 (0:00:02.222) 0:00:26.537 ***** 2026-02-14 02:36:08.179407 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDKwCJ1K1bwZCtVx0TDFCW7YBuEqwhmhtODQ5xKccvEsLtJzuzR8nwNB/vumQKF6OsQ8fjrMVJvToEEsCBNDEWo=) 2026-02-14 02:36:08.179425 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRuhTfHOcZA46ZSinrJCX+9WjVrqZOj1OZ5TbKGEux/ECZbM6G6+MQJ2+N1DuVnjtb2Ci4WnwVQkW/t6p3ByYSVZblXOIHoikcrayiKlV8WsaozuzCIdmgJuG8NjktB6hTWG0vwRce2wI5ZWCjFXThS+eCZuFtus0oCxyJGxV2ZbKYOuB/WEOkwLkwTeKz9/5iCFI06+NpI4pH9XU1nQAm0CvXV7XCvnwOUXr/FKyaB2qjky5xV36JVJPT1phrukvCbfAd6SoFPoTDHKsWjoe9SXvIBNSCdWclPJZhcqphNCF3DslTVIXq4jV0GsQMJ6/s9y/tOnWsdNeKp2FfaAEfJ1+Z9Jdu0DgOGPlPZrzpwUo4tlWhNFWKxOTQM8nhhruSYG+gq86es0gwumQL+qTWLvojSaI8sBmMgqw5+Ut2obwIyes8aaxhsQ1bMQ4NoeO0Ci/VvzJGtRRcesnwUVdYaLtVEu2abRccFLQceK0q1fCKwmV0ffrw+htbIfA3exs=) 2026-02-14 02:36:08.179442 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJZRTTCMFnlId9WzXQB8nTPM3z01ka3tjz27Yc55VHha) 2026-02-14 02:36:08.179456 | orchestrator | 2026-02-14 02:36:08.179470 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-14 02:36:08.179483 | orchestrator | Saturday 14 February 2026 02:36:04 +0000 (0:00:01.187) 0:00:27.724 ***** 2026-02-14 02:36:08.179498 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDiiXhqwOGfoSCFKxIPHmKWVfyFKaKfOZ7LZTqncXpVNc6/idn2VpT4uD8D2kiKtqw3apHkef2K6A9nemhCWa0BDtQN8qx3zw5TDJ0szqfcbysfxebFQl20Pt0J3S75lSEk2hesET5fQk69H9GGiVJp1jMrI3opZBgaHSr4vPR340zRYTcH+6x8Xmm7UukU8ShbdtJB9LRTFwXFiBQ4YBVBdCUc+vw4BzZT4AF7fftjYobxNOcTcYx1L318/Fgxh91JKTgjGEN1Rqo5A3KVI/UkQJ9vf5yM0DoIC3rEXS58/uCtxqIlx/78N8IBRmCsBiLAZGhqywFAShZeYcAtuilDCUW8Y4kShl0wFhajl3yR7fdPtlKNt/DpToyWf8m92XWDrxMzhx7YQxpGeZIerxzmUjcBs0iJorZK+KfFgycfkDQ/bLfvyQwChUoNsHHpRCfLgL3VhtJbJS1dq6RbaVGz/4e5xM3MT4+5vopXufxKSPlByfEc3rx9bogjX5e3OC0=) 2026-02-14 02:36:08.179512 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGoR9E25da5oK345M5OrZ81CliDoL6SlYNWskzdyPxgZvAZsHOi1oS+ouwxtA6lAlX7yUiOV1LO8QEYwrNb/T1E=) 2026-02-14 02:36:08.179525 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFzylB9WT3g/QLiZGCh/7FOa4uIqkOx8fd6EYAaFMVZO) 2026-02-14 02:36:08.179538 | orchestrator | 2026-02-14 02:36:08.179551 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-14 02:36:08.179565 | orchestrator | Saturday 14 February 2026 02:36:05 +0000 (0:00:01.211) 0:00:28.936 ***** 2026-02-14 02:36:08.179578 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwIsSIMsO9peSbDgNvmlTdreBbn+QOp5lA3gJnIgiTc1TI0jUyKLg8vkzhwywr2Yc19ysQ7BoPn60hbzcJWKPuAF0o+GMJR8RIUnA2VXr72q2Ke1Xws2NGJAvaNaVEhiEmf4lyurFw6vtNIWSA5h6KaEDVDmBTf4MbZlZXhqy/Vidcw7aB4t0tzQpR2Hk7gQQjsPqDXDKdMet0MZoDxJlGUc8pUEzOGbMgK5eQIvGNLEiT5BQJiKxGRS7oTf+ltQe30NeP2HEX5Jci8Sc6aLhFmuR9iE/HgFvmzPEwfM+UVLZcLuw0YtG+UUIZn0B43dZDzRbfaRGDYNRyFjzT1apFvnMPYmw+5g3XWRMIUw7PsWTdkyBjK+MpdTgQt2bHZR3mklYDFnpxbJBycXclGlg4k+/SjNxkAamW5HkAipGMcyDb7cm7UxdeX44YxjUO1vcO9p13Pvpks9djuxfY9XyG0cngBnIpniY1LZiCAwrVWWfstGd4D5Jxu5YNmG15W70=) 2026-02-14 02:36:08.179642 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBSx7u8OAsfGqhXLANjGKxX+Utr0KdkpfXygTmXZ92uXyesJt2mYeF6Y7oi3bggcmRYpGRAigmxb6VDL72jRlnE=) 2026-02-14 02:36:08.179657 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHoB69e5jWXk0mMr1xQAMPsqwFPWwTjsfBd6IXBoaUQi) 2026-02-14 02:36:08.179681 | orchestrator | 2026-02-14 02:36:08.179695 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-02-14 02:36:08.179735 | orchestrator | Saturday 14 February 2026 02:36:06 +0000 (0:00:01.230) 0:00:30.167 ***** 2026-02-14 02:36:08.179750 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-02-14 02:36:08.179764 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-02-14 02:36:08.179777 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-02-14 02:36:08.179791 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-02-14 02:36:08.179805 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-14 02:36:08.179818 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-14 02:36:08.179832 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-14 02:36:08.179848 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:36:08.179863 | orchestrator | 2026-02-14 02:36:08.179901 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-02-14 02:36:08.179916 | orchestrator | Saturday 14 February 2026 02:36:06 +0000 (0:00:00.217) 0:00:30.384 ***** 2026-02-14 02:36:08.179931 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:36:08.179944 | orchestrator | 2026-02-14 02:36:08.179958 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-02-14 02:36:08.179981 | orchestrator | Saturday 14 February 2026 
02:36:06 +0000 (0:00:00.070) 0:00:30.455 ***** 2026-02-14 02:36:08.179996 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:36:08.180009 | orchestrator | 2026-02-14 02:36:08.180023 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-02-14 02:36:08.180037 | orchestrator | Saturday 14 February 2026 02:36:07 +0000 (0:00:00.078) 0:00:30.533 ***** 2026-02-14 02:36:08.180051 | orchestrator | changed: [testbed-manager] 2026-02-14 02:36:08.180065 | orchestrator | 2026-02-14 02:36:08.180079 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 02:36:08.180093 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-14 02:36:08.180108 | orchestrator | 2026-02-14 02:36:08.180122 | orchestrator | 2026-02-14 02:36:08.180135 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 02:36:08.180149 | orchestrator | Saturday 14 February 2026 02:36:07 +0000 (0:00:00.900) 0:00:31.434 ***** 2026-02-14 02:36:08.180163 | orchestrator | =============================================================================== 2026-02-14 02:36:08.180177 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.30s 2026-02-14 02:36:08.180190 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.64s 2026-02-14 02:36:08.180204 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 2.22s 2026-02-14 02:36:08.180218 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.26s 2026-02-14 02:36:08.180230 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.23s 2026-02-14 02:36:08.180243 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.22s 2026-02-14 
02:36:08.180256 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s 2026-02-14 02:36:08.180269 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s 2026-02-14 02:36:08.180283 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2026-02-14 02:36:08.180296 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2026-02-14 02:36:08.180309 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2026-02-14 02:36:08.180322 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2026-02-14 02:36:08.180335 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-02-14 02:36:08.180349 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-02-14 02:36:08.180376 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-02-14 02:36:08.180390 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-02-14 02:36:08.180403 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.90s 2026-02-14 02:36:08.180417 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.24s 2026-02-14 02:36:08.180431 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.22s 2026-02-14 02:36:08.180445 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.20s 2026-02-14 02:36:08.614510 | orchestrator | + osism apply squid 2026-02-14 02:36:20.790744 | orchestrator | 2026-02-14 02:36:20 | INFO  | Task 25ceff8b-f2f6-4797-95e1-d43d31fd6f1b (squid) was prepared for execution. 
2026-02-14 02:36:20.790860 | orchestrator | 2026-02-14 02:36:20 | INFO  | It takes a moment until task 25ceff8b-f2f6-4797-95e1-d43d31fd6f1b (squid) has been started and output is visible here. 2026-02-14 02:38:21.995872 | orchestrator | 2026-02-14 02:38:21.995961 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-02-14 02:38:21.995974 | orchestrator | 2026-02-14 02:38:21.995983 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-02-14 02:38:21.995992 | orchestrator | Saturday 14 February 2026 02:36:25 +0000 (0:00:00.179) 0:00:00.179 ***** 2026-02-14 02:38:21.996000 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-02-14 02:38:21.996009 | orchestrator | 2026-02-14 02:38:21.996017 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-02-14 02:38:21.996024 | orchestrator | Saturday 14 February 2026 02:36:26 +0000 (0:00:00.108) 0:00:00.287 ***** 2026-02-14 02:38:21.996032 | orchestrator | ok: [testbed-manager] 2026-02-14 02:38:21.996040 | orchestrator | 2026-02-14 02:38:21.996048 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-02-14 02:38:21.996056 | orchestrator | Saturday 14 February 2026 02:36:28 +0000 (0:00:02.214) 0:00:02.501 ***** 2026-02-14 02:38:21.996065 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-02-14 02:38:21.996072 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-02-14 02:38:21.996080 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-02-14 02:38:21.996087 | orchestrator | 2026-02-14 02:38:21.996093 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-02-14 02:38:21.996100 | orchestrator | Saturday 
14 February 2026 02:36:29 +0000 (0:00:01.404) 0:00:03.906 ***** 2026-02-14 02:38:21.996107 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-02-14 02:38:21.996113 | orchestrator | 2026-02-14 02:38:21.996120 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-02-14 02:38:21.996127 | orchestrator | Saturday 14 February 2026 02:36:31 +0000 (0:00:01.376) 0:00:05.283 ***** 2026-02-14 02:38:21.996133 | orchestrator | ok: [testbed-manager] 2026-02-14 02:38:21.996140 | orchestrator | 2026-02-14 02:38:21.996147 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-02-14 02:38:21.996153 | orchestrator | Saturday 14 February 2026 02:36:31 +0000 (0:00:00.466) 0:00:05.749 ***** 2026-02-14 02:38:21.996160 | orchestrator | changed: [testbed-manager] 2026-02-14 02:38:21.996168 | orchestrator | 2026-02-14 02:38:21.996174 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-02-14 02:38:21.996181 | orchestrator | Saturday 14 February 2026 02:36:32 +0000 (0:00:01.145) 0:00:06.895 ***** 2026-02-14 02:38:21.996188 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-02-14 02:38:21.996198 | orchestrator | ok: [testbed-manager] 2026-02-14 02:38:21.996204 | orchestrator | 2026-02-14 02:38:21.996211 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-02-14 02:38:21.996233 | orchestrator | Saturday 14 February 2026 02:37:08 +0000 (0:00:35.949) 0:00:42.845 ***** 2026-02-14 02:38:21.996239 | orchestrator | changed: [testbed-manager] 2026-02-14 02:38:21.996246 | orchestrator | 2026-02-14 02:38:21.996253 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-02-14 02:38:21.996259 | orchestrator | Saturday 14 February 2026 02:37:20 +0000 (0:00:11.964) 0:00:54.809 ***** 2026-02-14 02:38:21.996266 | orchestrator | Pausing for 60 seconds 2026-02-14 02:38:21.996273 | orchestrator | changed: [testbed-manager] 2026-02-14 02:38:21.996280 | orchestrator | 2026-02-14 02:38:21.996287 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-02-14 02:38:21.996293 | orchestrator | Saturday 14 February 2026 02:38:20 +0000 (0:01:00.136) 0:01:54.945 ***** 2026-02-14 02:38:21.996300 | orchestrator | ok: [testbed-manager] 2026-02-14 02:38:21.996307 | orchestrator | 2026-02-14 02:38:21.996318 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-02-14 02:38:21.996328 | orchestrator | Saturday 14 February 2026 02:38:20 +0000 (0:00:00.081) 0:01:55.027 ***** 2026-02-14 02:38:21.996339 | orchestrator | changed: [testbed-manager] 2026-02-14 02:38:21.996349 | orchestrator | 2026-02-14 02:38:21.996359 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 02:38:21.996370 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-14 02:38:21.996380 | orchestrator | 2026-02-14 02:38:21.996391 | orchestrator | 2026-02-14 02:38:21.996401 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-02-14 02:38:21.996412 | orchestrator | Saturday 14 February 2026 02:38:21 +0000 (0:00:00.721) 0:01:55.748 ***** 2026-02-14 02:38:21.996422 | orchestrator | =============================================================================== 2026-02-14 02:38:21.996433 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.14s 2026-02-14 02:38:21.996443 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 35.95s 2026-02-14 02:38:21.996453 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.96s 2026-02-14 02:38:21.996476 | orchestrator | osism.services.squid : Install required packages ------------------------ 2.21s 2026-02-14 02:38:21.996488 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.40s 2026-02-14 02:38:21.996498 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.38s 2026-02-14 02:38:21.996509 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 1.15s 2026-02-14 02:38:21.996521 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.72s 2026-02-14 02:38:21.996532 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.47s 2026-02-14 02:38:21.996543 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.11s 2026-02-14 02:38:21.996554 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s 2026-02-14 02:38:22.542647 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-02-14 02:38:22.543192 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-14 02:38:22.603352 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-14 02:38:22.603445 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh 
kolla/release
2026-02-14 02:38:22.612051 | orchestrator | + set -e
2026-02-14 02:38:22.612126 | orchestrator | + NAMESPACE=kolla/release
2026-02-14 02:38:22.612137 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-02-14 02:38:22.617935 | orchestrator | ++ semver 9.5.0 9.0.0
2026-02-14 02:38:22.685882 | orchestrator | + [[ 1 -lt 0 ]]
2026-02-14 02:38:22.686356 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2026-02-14 02:38:35.174221 | orchestrator | 2026-02-14 02:38:35 | INFO  | Task 85d51604-88e8-4c17-9efb-7cc2c13e8642 (operator) was prepared for execution.
2026-02-14 02:38:35.174321 | orchestrator | 2026-02-14 02:38:35 | INFO  | It takes a moment until task 85d51604-88e8-4c17-9efb-7cc2c13e8642 (operator) has been started and output is visible here.
2026-02-14 02:38:52.324656 | orchestrator |
2026-02-14 02:38:52.324837 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2026-02-14 02:38:52.324858 | orchestrator |
2026-02-14 02:38:52.324871 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-14 02:38:52.324883 | orchestrator | Saturday 14 February 2026 02:38:39 +0000 (0:00:00.174) 0:00:00.174 *****
2026-02-14 02:38:52.324894 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:38:52.324906 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:38:52.324917 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:38:52.324928 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:38:52.324939 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:38:52.324949 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:38:52.324960 | orchestrator |
2026-02-14 02:38:52.324971 | orchestrator | TASK [Do not require tty for all users] ****************************************
2026-02-14 02:38:52.324982 | orchestrator | Saturday 14 February 2026 02:38:43 +0000 (0:00:03.384) 0:00:03.559 *****
2026-02-14 02:38:52.324993 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:38:52.325006 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:38:52.325027 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:38:52.325067 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:38:52.325087 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:38:52.325106 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:38:52.325123 | orchestrator |
2026-02-14 02:38:52.325142 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-02-14 02:38:52.325162 | orchestrator |
2026-02-14 02:38:52.325182 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-02-14 02:38:52.325201 | orchestrator | Saturday 14 February 2026 02:38:44 +0000 (0:00:00.814) 0:00:04.373 *****
2026-02-14 02:38:52.325221 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:38:52.325241 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:38:52.325261 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:38:52.325281 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:38:52.325309 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:38:52.325331 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:38:52.325350 | orchestrator |
2026-02-14 02:38:52.325368 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-02-14 02:38:52.325386 | orchestrator | Saturday 14 February 2026 02:38:44 +0000 (0:00:00.236) 0:00:04.610 *****
2026-02-14 02:38:52.325406 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:38:52.325427 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:38:52.325446 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:38:52.325464 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:38:52.325476 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:38:52.325487 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:38:52.325498 | orchestrator |
2026-02-14 02:38:52.325509 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-02-14 02:38:52.325519 | orchestrator | Saturday 14 February 2026 02:38:44 +0000 (0:00:00.218) 0:00:04.829 *****
2026-02-14 02:38:52.325530 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:38:52.325542 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:38:52.325553 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:38:52.325564 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:38:52.325575 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:38:52.325586 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:38:52.325597 | orchestrator |
2026-02-14 02:38:52.325607 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-02-14 02:38:52.325618 | orchestrator | Saturday 14 February 2026 02:38:45 +0000 (0:00:00.674) 0:00:05.503 *****
2026-02-14 02:38:52.325629 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:38:52.325640 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:38:52.325650 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:38:52.325661 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:38:52.325672 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:38:52.325683 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:38:52.325719 | orchestrator |
2026-02-14 02:38:52.325731 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-02-14 02:38:52.325781 | orchestrator | Saturday 14 February 2026 02:38:46 +0000 (0:00:00.829) 0:00:06.333 *****
2026-02-14 02:38:52.325799 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-02-14 02:38:52.325826 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-02-14 02:38:52.325846 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-02-14 02:38:52.325864 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-02-14 02:38:52.325884 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-02-14 02:38:52.325902 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-02-14 02:38:52.325920 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-02-14 02:38:52.325935 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-02-14 02:38:52.325946 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-02-14 02:38:52.325956 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-02-14 02:38:52.325967 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-02-14 02:38:52.325977 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-02-14 02:38:52.325988 | orchestrator |
2026-02-14 02:38:52.325999 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-02-14 02:38:52.326010 | orchestrator | Saturday 14 February 2026 02:38:47 +0000 (0:00:01.181) 0:00:07.515 *****
2026-02-14 02:38:52.326091 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:38:52.326102 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:38:52.326113 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:38:52.326124 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:38:52.326135 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:38:52.326146 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:38:52.326157 | orchestrator |
2026-02-14 02:38:52.326170 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-02-14 02:38:52.326261 | orchestrator | Saturday 14 February 2026 02:38:48 +0000 (0:00:01.264) 0:00:08.779 *****
2026-02-14 02:38:52.326283 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-02-14 02:38:52.326303 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-02-14 02:38:52.326322 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-02-14 02:38:52.326341 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-02-14 02:38:52.326387 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-02-14 02:38:52.326399 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-02-14 02:38:52.326410 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-02-14 02:38:52.326421 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-02-14 02:38:52.326431 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-02-14 02:38:52.326442 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-02-14 02:38:52.326453 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-02-14 02:38:52.326463 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-02-14 02:38:52.326473 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-02-14 02:38:52.326484 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-02-14 02:38:52.326494 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-02-14 02:38:52.326505 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-02-14 02:38:52.326516 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-02-14 02:38:52.326526 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-02-14 02:38:52.326537 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-02-14 02:38:52.326548 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-02-14 02:38:52.326573 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-02-14 02:38:52.326584 | orchestrator |
2026-02-14 02:38:52.326595 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-02-14 02:38:52.326607 | orchestrator | Saturday 14 February 2026 02:38:49 +0000 (0:00:01.272) 0:00:10.052 *****
2026-02-14 02:38:52.326617 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:38:52.326628 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:38:52.326638 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:38:52.326649 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:38:52.326659 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:38:52.326670 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:38:52.326681 | orchestrator |
2026-02-14 02:38:52.326692 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-02-14 02:38:52.326702 | orchestrator | Saturday 14 February 2026 02:38:49 +0000 (0:00:00.201) 0:00:10.253 *****
2026-02-14 02:38:52.326713 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:38:52.326723 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:38:52.326761 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:38:52.326773 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:38:52.326784 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:38:52.326794 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:38:52.326805 | orchestrator |
2026-02-14 02:38:52.326816 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-02-14 02:38:52.326826 | orchestrator | Saturday 14 February 2026 02:38:50 +0000 (0:00:00.217) 0:00:10.471 *****
2026-02-14 02:38:52.326837 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:38:52.326848 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:38:52.326858 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:38:52.326869 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:38:52.326879 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:38:52.326889 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:38:52.326900 | orchestrator |
2026-02-14 02:38:52.326911 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-02-14 02:38:52.326921 | orchestrator | Saturday 14 February 2026 02:38:50 +0000 (0:00:00.648) 0:00:11.119 *****
2026-02-14 02:38:52.326932 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:38:52.326943 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:38:52.326953 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:38:52.326964 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:38:52.326974 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:38:52.326985 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:38:52.326995 | orchestrator |
2026-02-14 02:38:52.327006 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-02-14 02:38:52.327016 | orchestrator | Saturday 14 February 2026 02:38:51 +0000 (0:00:00.225) 0:00:11.345 *****
2026-02-14 02:38:52.327027 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-14 02:38:52.327050 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:38:52.327062 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-14 02:38:52.327072 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:38:52.327083 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-14 02:38:52.327093 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:38:52.327104 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-02-14 02:38:52.327114 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:38:52.327125 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-14 02:38:52.327135 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:38:52.327146 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-02-14 02:38:52.327156 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:38:52.327167 | orchestrator |
2026-02-14 02:38:52.327177 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-02-14 02:38:52.327188 | orchestrator | Saturday 14 February 2026 02:38:51 +0000 (0:00:00.809) 0:00:12.154 *****
2026-02-14 02:38:52.327206 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:38:52.327216 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:38:52.327227 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:38:52.327237 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:38:52.327248 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:38:52.327258 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:38:52.327269 | orchestrator |
2026-02-14 02:38:52.327280 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-02-14 02:38:52.327290 | orchestrator | Saturday 14 February 2026 02:38:52 +0000 (0:00:00.217) 0:00:12.372 *****
2026-02-14 02:38:52.327301 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:38:52.327312 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:38:52.327322 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:38:52.327333 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:38:52.327351 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:38:53.867642 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:38:53.867728 | orchestrator |
2026-02-14 02:38:53.867801 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-02-14 02:38:53.867815 | orchestrator | Saturday 14 February 2026 02:38:52 +0000 (0:00:00.206) 0:00:12.579 *****
2026-02-14 02:38:53.867826 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:38:53.867838 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:38:53.867849 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:38:53.867860 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:38:53.867871 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:38:53.867882 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:38:53.867893 | orchestrator |
2026-02-14 02:38:53.867904 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-02-14 02:38:53.867915 | orchestrator | Saturday 14 February 2026 02:38:52 +0000 (0:00:00.179) 0:00:12.758 *****
2026-02-14 02:38:53.867925 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:38:53.867936 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:38:53.867966 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:38:53.867978 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:38:53.867989 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:38:53.868000 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:38:53.868010 | orchestrator |
2026-02-14 02:38:53.868021 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-02-14 02:38:53.868032 | orchestrator | Saturday 14 February 2026 02:38:53 +0000 (0:00:00.676) 0:00:13.434 *****
2026-02-14 02:38:53.868043 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:38:53.868053 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:38:53.868079 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:38:53.868091 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:38:53.868102 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:38:53.868112 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:38:53.868123 | orchestrator |
2026-02-14 02:38:53.868134 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 02:38:53.868146 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-14 02:38:53.868159 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-14 02:38:53.868170 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-14 02:38:53.868181 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-14 02:38:53.868191 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-14 02:38:53.868226 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-14 02:38:53.868239 | orchestrator |
2026-02-14 02:38:53.868251 | orchestrator |
2026-02-14 02:38:53.868263 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 02:38:53.868276 | orchestrator | Saturday 14 February 2026 02:38:53 +0000 (0:00:00.307) 0:00:13.742 *****
2026-02-14 02:38:53.868289 | orchestrator | ===============================================================================
2026-02-14 02:38:53.868301 | orchestrator | Gathering Facts --------------------------------------------------------- 3.38s
2026-02-14 02:38:53.868313 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.27s
2026-02-14 02:38:53.868327 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.26s
2026-02-14 02:38:53.868340 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.18s
2026-02-14 02:38:53.868352 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.83s
2026-02-14 02:38:53.868365 | orchestrator | Do not require tty for all users ---------------------------------------- 0.81s
2026-02-14 02:38:53.868377 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.81s
2026-02-14 02:38:53.868389 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.68s
2026-02-14 02:38:53.868400 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.67s
2026-02-14 02:38:53.868410 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.65s
2026-02-14 02:38:53.868421 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.31s
2026-02-14 02:38:53.868432 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.24s
2026-02-14 02:38:53.868443 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.23s
2026-02-14 02:38:53.868454 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.22s
2026-02-14 02:38:53.868465 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.22s
2026-02-14 02:38:53.868476 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.22s
2026-02-14 02:38:53.868487 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.21s
2026-02-14 02:38:53.868498 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.20s
2026-02-14 02:38:53.868509 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.18s
2026-02-14 02:38:54.273987 | orchestrator | + osism apply --environment custom facts
2026-02-14 02:38:56.552327 | orchestrator | 2026-02-14 02:38:56 | INFO  | Trying to run play facts in environment custom
2026-02-14 02:39:06.701586 | orchestrator | 2026-02-14 02:39:06 | INFO  | Task 0517f630-b3de-476b-8357-b25a588b5d5f (facts) was prepared for execution.
2026-02-14 02:39:06.701733 | orchestrator | 2026-02-14 02:39:06 | INFO  | It takes a moment until task 0517f630-b3de-476b-8357-b25a588b5d5f (facts) has been started and output is visible here.
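The shell trace at the top of this log (the `sed` rewrite of `docker_namespace` followed by `semver 9.5.0 9.0.0` and `[[ 1 -lt 0 ]]`) is a downgrade guard: point the deployment at the release namespace, then abort if the target version compares lower than the deployed one. A minimal sketch of that gate follows; the `semver` function here is a hypothetical stand-in (built on `sort -V`) for whatever semver CLI the job actually invokes, and the `KOLLA_YML` path is an assumption for illustration, not the real `/opt/configuration` file.

```shell
#!/usr/bin/env bash
set -e

# Hypothetical stand-in for the semver CLI seen in the trace:
# prints 1 if $1 > $2, 0 if equal, -1 if $1 < $2.
semver() {
  if [ "$1" = "$2" ]; then echo 0; return; fi
  local lower
  lower=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)
  if [ "$lower" = "$1" ]; then echo -1; else echo 1; fi
}

NAMESPACE=kolla/release
KOLLA_YML=${KOLLA_YML:-/tmp/kolla.yml}   # assumed path for this sketch
printf 'docker_namespace: old/value\n' > "$KOLLA_YML"

# Point the deployment at the release namespace, as the job does with sed.
sed -i "s#docker_namespace: .*#docker_namespace: ${NAMESPACE}#g" "$KOLLA_YML"

# Refuse to proceed when the target release is older than the deployed one.
result=$(semver 9.5.0 9.0.0)
if [[ "$result" -lt 0 ]]; then
  echo "downgrade not supported" >&2
  exit 1
fi
```

With 9.5.0 vs 9.0.0 the comparison yields 1, so the guard falls through and the job continues with `osism apply`, which matches the `+ [[ 1 -lt 0 ]]` line in the trace.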
2026-02-14 02:39:49.730155 | orchestrator |
2026-02-14 02:39:49.730259 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-02-14 02:39:49.730270 | orchestrator |
2026-02-14 02:39:49.730279 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-02-14 02:39:49.730287 | orchestrator | Saturday 14 February 2026 02:39:11 +0000 (0:00:00.104) 0:00:00.104 *****
2026-02-14 02:39:49.730295 | orchestrator | ok: [testbed-manager]
2026-02-14 02:39:49.730304 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:39:49.730312 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:39:49.730320 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:39:49.730327 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:39:49.730335 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:39:49.730361 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:39:49.730369 | orchestrator |
2026-02-14 02:39:49.730377 | orchestrator | TASK [Copy fact file] **********************************************************
2026-02-14 02:39:49.730385 | orchestrator | Saturday 14 February 2026 02:39:13 +0000 (0:00:01.394) 0:00:01.499 *****
2026-02-14 02:39:49.730392 | orchestrator | ok: [testbed-manager]
2026-02-14 02:39:49.730399 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:39:49.730407 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:39:49.730414 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:39:49.730421 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:39:49.730429 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:39:49.730436 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:39:49.730443 | orchestrator |
2026-02-14 02:39:49.730450 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-02-14 02:39:49.730458 | orchestrator |
2026-02-14 02:39:49.730466 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-02-14 02:39:49.730473 | orchestrator | Saturday 14 February 2026 02:39:14 +0000 (0:00:01.246) 0:00:02.745 *****
2026-02-14 02:39:49.730480 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:39:49.730488 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:39:49.730495 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:39:49.730503 | orchestrator |
2026-02-14 02:39:49.730510 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-02-14 02:39:49.730518 | orchestrator | Saturday 14 February 2026 02:39:14 +0000 (0:00:00.133) 0:00:02.879 *****
2026-02-14 02:39:49.730526 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:39:49.730533 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:39:49.730540 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:39:49.730548 | orchestrator |
2026-02-14 02:39:49.730555 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-02-14 02:39:49.730562 | orchestrator | Saturday 14 February 2026 02:39:14 +0000 (0:00:00.232) 0:00:03.111 *****
2026-02-14 02:39:49.730570 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:39:49.730577 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:39:49.730584 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:39:49.730592 | orchestrator |
2026-02-14 02:39:49.730599 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-02-14 02:39:49.730607 | orchestrator | Saturday 14 February 2026 02:39:14 +0000 (0:00:00.305) 0:00:03.416 *****
2026-02-14 02:39:49.730617 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-14 02:39:49.730625 | orchestrator |
2026-02-14 02:39:49.730633 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-02-14 02:39:49.730641 | orchestrator | Saturday 14 February 2026 02:39:15 +0000 (0:00:00.180) 0:00:03.597 *****
2026-02-14 02:39:49.730648 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:39:49.730655 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:39:49.730664 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:39:49.730672 | orchestrator |
2026-02-14 02:39:49.730680 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-02-14 02:39:49.730689 | orchestrator | Saturday 14 February 2026 02:39:15 +0000 (0:00:00.468) 0:00:04.066 *****
2026-02-14 02:39:49.730698 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:39:49.730706 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:39:49.730715 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:39:49.730723 | orchestrator |
2026-02-14 02:39:49.730731 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-02-14 02:39:49.730740 | orchestrator | Saturday 14 February 2026 02:39:15 +0000 (0:00:00.201) 0:00:04.268 *****
2026-02-14 02:39:49.730749 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:39:49.730757 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:39:49.730766 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:39:49.730791 | orchestrator |
2026-02-14 02:39:49.730800 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-02-14 02:39:49.730814 | orchestrator | Saturday 14 February 2026 02:39:16 +0000 (0:00:01.092) 0:00:05.360 *****
2026-02-14 02:39:49.730823 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:39:49.730832 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:39:49.730839 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:39:49.730848 | orchestrator |
2026-02-14 02:39:49.730856 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-02-14 02:39:49.730864 | orchestrator | Saturday 14 February 2026 02:39:17 +0000 (0:00:00.504) 0:00:05.864 *****
2026-02-14 02:39:49.730872 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:39:49.730880 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:39:49.730889 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:39:49.730897 | orchestrator |
2026-02-14 02:39:49.730906 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-02-14 02:39:49.730952 | orchestrator | Saturday 14 February 2026 02:39:18 +0000 (0:00:01.075) 0:00:06.939 *****
2026-02-14 02:39:49.730960 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:39:49.730968 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:39:49.730975 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:39:49.731064 | orchestrator |
2026-02-14 02:39:49.731072 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-02-14 02:39:49.731080 | orchestrator | Saturday 14 February 2026 02:39:33 +0000 (0:00:15.278) 0:00:22.218 *****
2026-02-14 02:39:49.731087 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:39:49.731094 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:39:49.731102 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:39:49.731109 | orchestrator |
2026-02-14 02:39:49.731116 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-02-14 02:39:49.731139 | orchestrator | Saturday 14 February 2026 02:39:33 +0000 (0:00:00.123) 0:00:22.341 *****
2026-02-14 02:39:49.731147 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:39:49.731154 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:39:49.731161 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:39:49.731168 | orchestrator |
2026-02-14 02:39:49.731181 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-02-14 02:39:49.731189 | orchestrator | Saturday 14 February 2026 02:39:40 +0000 (0:00:07.006) 0:00:29.347 *****
2026-02-14 02:39:49.731196 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:39:49.731204 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:39:49.731211 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:39:49.731218 | orchestrator |
2026-02-14 02:39:49.731225 | orchestrator | TASK [Copy fact files] *********************************************************
2026-02-14 02:39:49.731232 | orchestrator | Saturday 14 February 2026 02:39:41 +0000 (0:00:00.451) 0:00:29.799 *****
2026-02-14 02:39:49.731240 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-02-14 02:39:49.731248 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-02-14 02:39:49.731255 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-02-14 02:39:49.731262 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-02-14 02:39:49.731269 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-02-14 02:39:49.731276 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-02-14 02:39:49.731283 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-02-14 02:39:49.731291 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-02-14 02:39:49.731298 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-02-14 02:39:49.731305 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-02-14 02:39:49.731312 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-02-14 02:39:49.731319 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-02-14 02:39:49.731326 | orchestrator |
2026-02-14 02:39:49.731334 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-02-14 02:39:49.731347 | orchestrator | Saturday 14 February 2026 02:39:44 +0000 (0:00:03.485) 0:00:33.284 *****
2026-02-14 02:39:49.731355 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:39:49.731362 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:39:49.731369 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:39:49.731376 | orchestrator |
2026-02-14 02:39:49.731383 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-14 02:39:49.731391 | orchestrator |
2026-02-14 02:39:49.731398 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-14 02:39:49.731405 | orchestrator | Saturday 14 February 2026 02:39:46 +0000 (0:00:01.336) 0:00:34.620 *****
2026-02-14 02:39:49.731412 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:39:49.731420 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:39:49.731427 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:39:49.731434 | orchestrator | ok: [testbed-manager]
2026-02-14 02:39:49.731441 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:39:49.731448 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:39:49.731456 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:39:49.731463 | orchestrator |
2026-02-14 02:39:49.731470 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 02:39:49.731478 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-14 02:39:49.731487 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-14 02:39:49.731495 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-14 02:39:49.731503 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-14 02:39:49.731510 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-14 02:39:49.731518 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-14 02:39:49.731525 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-14 02:39:49.731532 | orchestrator |
2026-02-14 02:39:49.731539 | orchestrator |
2026-02-14 02:39:49.731547 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 02:39:49.731554 | orchestrator | Saturday 14 February 2026 02:39:49 +0000 (0:00:03.554) 0:00:38.175 *****
2026-02-14 02:39:49.731561 | orchestrator | ===============================================================================
2026-02-14 02:39:49.731569 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.28s
2026-02-14 02:39:49.731576 | orchestrator | Install required packages (Debian) -------------------------------------- 7.01s
2026-02-14 02:39:49.731583 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.56s
2026-02-14 02:39:49.731590 | orchestrator | Copy fact files --------------------------------------------------------- 3.49s
2026-02-14 02:39:49.731597 | orchestrator | Create custom facts directory ------------------------------------------- 1.39s
2026-02-14 02:39:49.731605 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.34s
2026-02-14 02:39:49.731616 | orchestrator | Copy fact file ---------------------------------------------------------- 1.25s
2026-02-14 02:39:50.100433 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.09s
2026-02-14 02:39:50.100518 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.08s
2026-02-14 02:39:50.100554 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.50s
2026-02-14 02:39:50.100581 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.47s
2026-02-14 02:39:50.100590 | orchestrator | Create custom facts directory ------------------------------------------- 0.45s
2026-02-14 02:39:50.100598 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.31s
2026-02-14 02:39:50.100607 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.23s
2026-02-14 02:39:50.100615 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.20s
2026-02-14 02:39:50.100624 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.18s
2026-02-14 02:39:50.100634 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.13s
2026-02-14 02:39:50.100642 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.12s
2026-02-14 02:39:50.559155 | orchestrator | + osism apply bootstrap
2026-02-14 02:40:03.097443 | orchestrator | 2026-02-14 02:40:03 | INFO  | Task 911fa538-7825-48c4-b951-c5b5aab7f32c (bootstrap) was prepared for execution.
2026-02-14 02:40:03.097547 | orchestrator | 2026-02-14 02:40:03 | INFO  | It takes a moment until task 911fa538-7825-48c4-b951-c5b5aab7f32c (bootstrap) has been started and output is visible here.
2026-02-14 02:40:20.938111 | orchestrator |
2026-02-14 02:40:20.938223 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-02-14 02:40:20.938236 | orchestrator |
2026-02-14 02:40:20.938246 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-02-14 02:40:20.938254 | orchestrator | Saturday 14 February 2026 02:40:08 +0000 (0:00:00.186) 0:00:00.186 *****
2026-02-14 02:40:20.938263 | orchestrator | ok: [testbed-manager]
2026-02-14 02:40:20.938271 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:40:20.938279 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:40:20.938287 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:40:20.938295 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:40:20.938303 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:40:20.938310 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:40:20.938319 | orchestrator |
2026-02-14 02:40:20.938327 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-14 02:40:20.938335 | orchestrator |
2026-02-14 02:40:20.938343 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-14 02:40:20.938350 | orchestrator | Saturday 14 February 2026 02:40:08 +0000 (0:00:00.340) 0:00:00.527 *****
2026-02-14 02:40:20.938358 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:40:20.938366 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:40:20.938374 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:40:20.938382 | orchestrator | ok: [testbed-manager]
2026-02-14 02:40:20.938389 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:40:20.938397 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:40:20.938405 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:40:20.938412 | orchestrator |
2026-02-14 02:40:20.938420 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-02-14 02:40:20.938428 | orchestrator | 2026-02-14 02:40:20.938436 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-14 02:40:20.938444 | orchestrator | Saturday 14 February 2026 02:40:12 +0000 (0:00:03.651) 0:00:04.178 ***** 2026-02-14 02:40:20.938453 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-02-14 02:40:20.938461 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-02-14 02:40:20.938469 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2026-02-14 02:40:20.938477 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-02-14 02:40:20.938485 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-14 02:40:20.938492 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2026-02-14 02:40:20.938500 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-14 02:40:20.938508 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-02-14 02:40:20.938516 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-14 02:40:20.938547 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2026-02-14 02:40:20.938555 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-14 02:40:20.938563 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-14 02:40:20.938571 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-14 02:40:20.938579 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-14 02:40:20.938588 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-14 02:40:20.938598 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2026-02-14 02:40:20.938608 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2026-02-14 02:40:20.938616 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-1)  2026-02-14 02:40:20.938626 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-14 02:40:20.938634 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-14 02:40:20.938643 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-14 02:40:20.938652 | orchestrator | skipping: [testbed-node-3] 2026-02-14 02:40:20.938660 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-14 02:40:20.938669 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-14 02:40:20.938678 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-14 02:40:20.938686 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-14 02:40:20.938695 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-14 02:40:20.938704 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2026-02-14 02:40:20.938713 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-14 02:40:20.938722 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-14 02:40:20.938730 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-14 02:40:20.938739 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-14 02:40:20.938748 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:40:20.938757 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-14 02:40:20.938766 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-14 02:40:20.938775 | orchestrator | skipping: [testbed-node-4] 2026-02-14 02:40:20.938783 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-14 02:40:20.938813 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-14 02:40:20.938822 | orchestrator | skipping: [testbed-node-1] => 
(item=testbed-node-5)  2026-02-14 02:40:20.938831 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-14 02:40:20.938840 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-14 02:40:20.938849 | orchestrator | skipping: [testbed-node-5] 2026-02-14 02:40:20.938858 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-14 02:40:20.938867 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-14 02:40:20.938876 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-14 02:40:20.938885 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-14 02:40:20.938910 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-14 02:40:20.938919 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-14 02:40:20.938928 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-14 02:40:20.938937 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-14 02:40:20.938946 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-14 02:40:20.938954 | orchestrator | skipping: [testbed-node-1] 2026-02-14 02:40:20.938962 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-14 02:40:20.938970 | orchestrator | skipping: [testbed-node-0] 2026-02-14 02:40:20.938985 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-14 02:40:20.939008 | orchestrator | skipping: [testbed-node-2] 2026-02-14 02:40:20.939017 | orchestrator | 2026-02-14 02:40:20.939025 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2026-02-14 02:40:20.939033 | orchestrator | 2026-02-14 02:40:20.939041 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-02-14 02:40:20.939060 | orchestrator | Saturday 14 February 2026 02:40:12 +0000 
(0:00:00.587) 0:00:04.766 ***** 2026-02-14 02:40:20.939081 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:40:20.939090 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:40:20.939098 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:40:20.939105 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:40:20.939113 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:40:20.939121 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:40:20.939129 | orchestrator | ok: [testbed-manager] 2026-02-14 02:40:20.939137 | orchestrator | 2026-02-14 02:40:20.939145 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-02-14 02:40:20.939152 | orchestrator | Saturday 14 February 2026 02:40:14 +0000 (0:00:01.326) 0:00:06.092 ***** 2026-02-14 02:40:20.939160 | orchestrator | ok: [testbed-manager] 2026-02-14 02:40:20.939168 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:40:20.939180 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:40:20.939194 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:40:20.939207 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:40:20.939221 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:40:20.939234 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:40:20.939248 | orchestrator | 2026-02-14 02:40:20.939262 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-02-14 02:40:20.939276 | orchestrator | Saturday 14 February 2026 02:40:15 +0000 (0:00:01.336) 0:00:07.428 ***** 2026-02-14 02:40:20.939290 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 02:40:20.939306 | orchestrator | 2026-02-14 02:40:20.939320 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-02-14 02:40:20.939333 | orchestrator | 
Saturday 14 February 2026 02:40:15 +0000 (0:00:00.373) 0:00:07.802 ***** 2026-02-14 02:40:20.939347 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:40:20.939362 | orchestrator | changed: [testbed-manager] 2026-02-14 02:40:20.939376 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:40:20.939389 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:40:20.939402 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:40:20.939414 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:40:20.939426 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:40:20.939440 | orchestrator | 2026-02-14 02:40:20.939453 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-02-14 02:40:20.939466 | orchestrator | Saturday 14 February 2026 02:40:18 +0000 (0:00:02.362) 0:00:10.165 ***** 2026-02-14 02:40:20.939480 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:40:20.939495 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 02:40:20.939510 | orchestrator | 2026-02-14 02:40:20.939524 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-02-14 02:40:20.939538 | orchestrator | Saturday 14 February 2026 02:40:18 +0000 (0:00:00.358) 0:00:10.523 ***** 2026-02-14 02:40:20.939552 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:40:20.939564 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:40:20.939577 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:40:20.939590 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:40:20.939603 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:40:20.939615 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:40:20.939640 | orchestrator | 2026-02-14 02:40:20.939663 | orchestrator | TASK [osism.commons.proxy : 
Set system wide settings in environment file] ****** 2026-02-14 02:40:20.939679 | orchestrator | Saturday 14 February 2026 02:40:19 +0000 (0:00:01.087) 0:00:11.610 ***** 2026-02-14 02:40:20.939694 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:40:20.939708 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:40:20.939723 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:40:20.939738 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:40:20.939752 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:40:20.939766 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:40:20.939779 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:40:20.939821 | orchestrator | 2026-02-14 02:40:20.939836 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-02-14 02:40:20.939851 | orchestrator | Saturday 14 February 2026 02:40:20 +0000 (0:00:00.665) 0:00:12.276 ***** 2026-02-14 02:40:20.939864 | orchestrator | skipping: [testbed-node-3] 2026-02-14 02:40:20.939878 | orchestrator | skipping: [testbed-node-4] 2026-02-14 02:40:20.939892 | orchestrator | skipping: [testbed-node-5] 2026-02-14 02:40:20.939906 | orchestrator | skipping: [testbed-node-0] 2026-02-14 02:40:20.939921 | orchestrator | skipping: [testbed-node-1] 2026-02-14 02:40:20.939935 | orchestrator | skipping: [testbed-node-2] 2026-02-14 02:40:20.939949 | orchestrator | ok: [testbed-manager] 2026-02-14 02:40:20.939964 | orchestrator | 2026-02-14 02:40:20.939979 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-02-14 02:40:20.939996 | orchestrator | Saturday 14 February 2026 02:40:20 +0000 (0:00:00.556) 0:00:12.832 ***** 2026-02-14 02:40:20.940010 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:40:20.940025 | orchestrator | skipping: [testbed-node-3] 2026-02-14 02:40:20.940055 | orchestrator | skipping: [testbed-node-4] 2026-02-14 02:40:34.480682 | orchestrator | skipping: 
[testbed-node-5] 2026-02-14 02:40:34.480867 | orchestrator | skipping: [testbed-node-0] 2026-02-14 02:40:34.480893 | orchestrator | skipping: [testbed-node-1] 2026-02-14 02:40:34.480907 | orchestrator | skipping: [testbed-node-2] 2026-02-14 02:40:34.480921 | orchestrator | 2026-02-14 02:40:34.480937 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-02-14 02:40:34.480953 | orchestrator | Saturday 14 February 2026 02:40:21 +0000 (0:00:00.268) 0:00:13.100 ***** 2026-02-14 02:40:34.480968 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 02:40:34.480998 | orchestrator | 2026-02-14 02:40:34.481012 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-02-14 02:40:34.481027 | orchestrator | Saturday 14 February 2026 02:40:21 +0000 (0:00:00.362) 0:00:13.462 ***** 2026-02-14 02:40:34.481041 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 02:40:34.481056 | orchestrator | 2026-02-14 02:40:34.481070 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-02-14 02:40:34.481083 | orchestrator | Saturday 14 February 2026 02:40:21 +0000 (0:00:00.373) 0:00:13.836 ***** 2026-02-14 02:40:34.481096 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:40:34.481110 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:40:34.481122 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:40:34.481136 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:40:34.481167 | orchestrator | ok: [testbed-node-0] 2026-02-14 
02:40:34.481192 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:40:34.481207 | orchestrator | ok: [testbed-manager] 2026-02-14 02:40:34.481222 | orchestrator | 2026-02-14 02:40:34.481236 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-02-14 02:40:34.481251 | orchestrator | Saturday 14 February 2026 02:40:23 +0000 (0:00:01.634) 0:00:15.471 ***** 2026-02-14 02:40:34.481295 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:40:34.481310 | orchestrator | skipping: [testbed-node-3] 2026-02-14 02:40:34.481324 | orchestrator | skipping: [testbed-node-4] 2026-02-14 02:40:34.481338 | orchestrator | skipping: [testbed-node-5] 2026-02-14 02:40:34.481351 | orchestrator | skipping: [testbed-node-0] 2026-02-14 02:40:34.481364 | orchestrator | skipping: [testbed-node-1] 2026-02-14 02:40:34.481377 | orchestrator | skipping: [testbed-node-2] 2026-02-14 02:40:34.481390 | orchestrator | 2026-02-14 02:40:34.481403 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-02-14 02:40:34.481418 | orchestrator | Saturday 14 February 2026 02:40:23 +0000 (0:00:00.279) 0:00:15.751 ***** 2026-02-14 02:40:34.481430 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:40:34.481443 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:40:34.481455 | orchestrator | ok: [testbed-manager] 2026-02-14 02:40:34.481467 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:40:34.481480 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:40:34.481492 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:40:34.481505 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:40:34.481518 | orchestrator | 2026-02-14 02:40:34.481531 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-02-14 02:40:34.481545 | orchestrator | Saturday 14 February 2026 02:40:24 +0000 (0:00:00.566) 0:00:16.317 ***** 2026-02-14 02:40:34.481558 | orchestrator | skipping: 
[testbed-manager] 2026-02-14 02:40:34.481572 | orchestrator | skipping: [testbed-node-3] 2026-02-14 02:40:34.481585 | orchestrator | skipping: [testbed-node-4] 2026-02-14 02:40:34.481598 | orchestrator | skipping: [testbed-node-5] 2026-02-14 02:40:34.481612 | orchestrator | skipping: [testbed-node-0] 2026-02-14 02:40:34.481626 | orchestrator | skipping: [testbed-node-1] 2026-02-14 02:40:34.481639 | orchestrator | skipping: [testbed-node-2] 2026-02-14 02:40:34.481653 | orchestrator | 2026-02-14 02:40:34.481667 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-02-14 02:40:34.481682 | orchestrator | Saturday 14 February 2026 02:40:24 +0000 (0:00:00.462) 0:00:16.780 ***** 2026-02-14 02:40:34.481695 | orchestrator | ok: [testbed-manager] 2026-02-14 02:40:34.481708 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:40:34.481722 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:40:34.481735 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:40:34.481749 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:40:34.481761 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:40:34.481787 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:40:34.481827 | orchestrator | 2026-02-14 02:40:34.481841 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-02-14 02:40:34.481855 | orchestrator | Saturday 14 February 2026 02:40:25 +0000 (0:00:00.608) 0:00:17.388 ***** 2026-02-14 02:40:34.481868 | orchestrator | ok: [testbed-manager] 2026-02-14 02:40:34.481882 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:40:34.481896 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:40:34.481909 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:40:34.481919 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:40:34.481950 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:40:34.481972 | orchestrator | changed: 
[testbed-node-2] 2026-02-14 02:40:34.481984 | orchestrator | 2026-02-14 02:40:34.481996 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-02-14 02:40:34.482008 | orchestrator | Saturday 14 February 2026 02:40:26 +0000 (0:00:01.198) 0:00:18.587 ***** 2026-02-14 02:40:34.482089 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:40:34.482103 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:40:34.482116 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:40:34.482129 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:40:34.482143 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:40:34.482156 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:40:34.482170 | orchestrator | ok: [testbed-manager] 2026-02-14 02:40:34.482183 | orchestrator | 2026-02-14 02:40:34.482196 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-02-14 02:40:34.482226 | orchestrator | Saturday 14 February 2026 02:40:27 +0000 (0:00:01.186) 0:00:19.773 ***** 2026-02-14 02:40:34.482267 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 02:40:34.482283 | orchestrator | 2026-02-14 02:40:34.482298 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-02-14 02:40:34.482312 | orchestrator | Saturday 14 February 2026 02:40:28 +0000 (0:00:00.366) 0:00:20.140 ***** 2026-02-14 02:40:34.482325 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:40:34.482339 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:40:34.482352 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:40:34.482366 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:40:34.482380 | orchestrator | changed: [testbed-node-0] 2026-02-14 
02:40:34.482393 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:40:34.482406 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:40:34.482419 | orchestrator | 2026-02-14 02:40:34.482432 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-02-14 02:40:34.482446 | orchestrator | Saturday 14 February 2026 02:40:29 +0000 (0:00:01.333) 0:00:21.473 ***** 2026-02-14 02:40:34.482462 | orchestrator | ok: [testbed-manager] 2026-02-14 02:40:34.482475 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:40:34.482488 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:40:34.482501 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:40:34.482515 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:40:34.482530 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:40:34.482544 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:40:34.482557 | orchestrator | 2026-02-14 02:40:34.482572 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-14 02:40:34.482586 | orchestrator | Saturday 14 February 2026 02:40:29 +0000 (0:00:00.298) 0:00:21.771 ***** 2026-02-14 02:40:34.482600 | orchestrator | ok: [testbed-manager] 2026-02-14 02:40:34.482613 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:40:34.482626 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:40:34.482639 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:40:34.482652 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:40:34.482665 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:40:34.482680 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:40:34.482694 | orchestrator | 2026-02-14 02:40:34.482708 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-14 02:40:34.482722 | orchestrator | Saturday 14 February 2026 02:40:29 +0000 (0:00:00.257) 0:00:22.029 ***** 2026-02-14 02:40:34.482736 | orchestrator | ok: [testbed-manager] 2026-02-14 02:40:34.482748 | 
orchestrator | ok: [testbed-node-3] 2026-02-14 02:40:34.482761 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:40:34.482774 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:40:34.482787 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:40:34.482828 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:40:34.482842 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:40:34.482855 | orchestrator | 2026-02-14 02:40:34.482868 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-14 02:40:34.482882 | orchestrator | Saturday 14 February 2026 02:40:30 +0000 (0:00:00.293) 0:00:22.322 ***** 2026-02-14 02:40:34.482896 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 02:40:34.482912 | orchestrator | 2026-02-14 02:40:34.482926 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-02-14 02:40:34.482940 | orchestrator | Saturday 14 February 2026 02:40:30 +0000 (0:00:00.351) 0:00:22.674 ***** 2026-02-14 02:40:34.482952 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:40:34.482965 | orchestrator | ok: [testbed-manager] 2026-02-14 02:40:34.482993 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:40:34.483008 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:40:34.483022 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:40:34.483035 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:40:34.483048 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:40:34.483062 | orchestrator | 2026-02-14 02:40:34.483077 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-14 02:40:34.483092 | orchestrator | Saturday 14 February 2026 02:40:31 +0000 (0:00:00.620) 0:00:23.294 ***** 2026-02-14 02:40:34.483107 | orchestrator | 
skipping: [testbed-manager] 2026-02-14 02:40:34.483122 | orchestrator | skipping: [testbed-node-3] 2026-02-14 02:40:34.483135 | orchestrator | skipping: [testbed-node-4] 2026-02-14 02:40:34.483148 | orchestrator | skipping: [testbed-node-5] 2026-02-14 02:40:34.483161 | orchestrator | skipping: [testbed-node-0] 2026-02-14 02:40:34.483173 | orchestrator | skipping: [testbed-node-1] 2026-02-14 02:40:34.483187 | orchestrator | skipping: [testbed-node-2] 2026-02-14 02:40:34.483200 | orchestrator | 2026-02-14 02:40:34.483214 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-14 02:40:34.483226 | orchestrator | Saturday 14 February 2026 02:40:31 +0000 (0:00:00.290) 0:00:23.585 ***** 2026-02-14 02:40:34.483240 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:40:34.483251 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:40:34.483259 | orchestrator | ok: [testbed-manager] 2026-02-14 02:40:34.483267 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:40:34.483275 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:40:34.483283 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:40:34.483290 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:40:34.483298 | orchestrator | 2026-02-14 02:40:34.483306 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-14 02:40:34.483314 | orchestrator | Saturday 14 February 2026 02:40:32 +0000 (0:00:01.107) 0:00:24.692 ***** 2026-02-14 02:40:34.483322 | orchestrator | ok: [testbed-manager] 2026-02-14 02:40:34.483329 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:40:34.483337 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:40:34.483345 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:40:34.483353 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:40:34.483360 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:40:34.483368 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:40:34.483376 | orchestrator | 
2026-02-14 02:40:34.483384 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-14 02:40:34.483392 | orchestrator | Saturday 14 February 2026 02:40:33 +0000 (0:00:00.611) 0:00:25.303 ***** 2026-02-14 02:40:34.483399 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:40:34.483407 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:40:34.483415 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:40:34.483434 | orchestrator | ok: [testbed-manager] 2026-02-14 02:40:34.483456 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:41:16.703135 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:41:16.703225 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:41:16.703235 | orchestrator | 2026-02-14 02:41:16.703242 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-14 02:41:16.703250 | orchestrator | Saturday 14 February 2026 02:40:34 +0000 (0:00:01.209) 0:00:26.513 ***** 2026-02-14 02:41:16.703257 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:41:16.703263 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:41:16.703269 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:41:16.703275 | orchestrator | changed: [testbed-manager] 2026-02-14 02:41:16.703281 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:41:16.703287 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:41:16.703293 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:41:16.703299 | orchestrator | 2026-02-14 02:41:16.703305 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2026-02-14 02:41:16.703311 | orchestrator | Saturday 14 February 2026 02:40:49 +0000 (0:00:14.965) 0:00:41.478 ***** 2026-02-14 02:41:16.703317 | orchestrator | ok: [testbed-manager] 2026-02-14 02:41:16.703337 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:41:16.703343 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:41:16.703348 | orchestrator 
| ok: [testbed-node-5]
2026-02-14 02:41:16.703354 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:41:16.703359 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:41:16.703365 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:41:16.703371 | orchestrator |
2026-02-14 02:41:16.703376 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-02-14 02:41:16.703382 | orchestrator | Saturday 14 February 2026 02:40:49 +0000 (0:00:00.288) 0:00:41.767 *****
2026-02-14 02:41:16.703388 | orchestrator | ok: [testbed-manager]
2026-02-14 02:41:16.703393 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:41:16.703399 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:41:16.703405 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:41:16.703410 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:41:16.703416 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:41:16.703422 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:41:16.703427 | orchestrator |
2026-02-14 02:41:16.703433 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-02-14 02:41:16.703439 | orchestrator | Saturday 14 February 2026 02:40:49 +0000 (0:00:00.252) 0:00:42.019 *****
2026-02-14 02:41:16.703444 | orchestrator | ok: [testbed-manager]
2026-02-14 02:41:16.703450 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:41:16.703456 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:41:16.703461 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:41:16.703470 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:41:16.703479 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:41:16.703488 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:41:16.703501 | orchestrator |
2026-02-14 02:41:16.703513 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-02-14 02:41:16.703523 | orchestrator | Saturday 14 February 2026 02:40:50 +0000 (0:00:00.278) 0:00:42.297 *****
2026-02-14 02:41:16.703535 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 02:41:16.703546 | orchestrator |
2026-02-14 02:41:16.703555 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-02-14 02:41:16.703564 | orchestrator | Saturday 14 February 2026 02:40:50 +0000 (0:00:00.351) 0:00:42.649 *****
2026-02-14 02:41:16.703573 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:41:16.703582 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:41:16.703590 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:41:16.703598 | orchestrator | ok: [testbed-manager]
2026-02-14 02:41:16.703607 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:41:16.703615 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:41:16.703624 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:41:16.703633 | orchestrator |
2026-02-14 02:41:16.703642 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-02-14 02:41:16.703652 | orchestrator | Saturday 14 February 2026 02:40:52 +0000 (0:00:01.718) 0:00:44.367 *****
2026-02-14 02:41:16.703661 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:41:16.703669 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:41:16.703678 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:41:16.703687 | orchestrator | changed: [testbed-manager]
2026-02-14 02:41:16.703696 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:41:16.703705 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:41:16.703714 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:41:16.703723 | orchestrator |
2026-02-14 02:41:16.703732 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-02-14 02:41:16.703749 | orchestrator | Saturday 14 February 2026 02:40:53 +0000 (0:00:01.100) 0:00:45.468 *****
2026-02-14 02:41:16.703760 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:41:16.703769 | orchestrator | ok: [testbed-manager]
2026-02-14 02:41:16.703779 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:41:16.703797 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:41:16.703803 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:41:16.703809 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:41:16.703814 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:41:16.703820 | orchestrator |
2026-02-14 02:41:16.703895 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-02-14 02:41:16.703907 | orchestrator | Saturday 14 February 2026 02:40:54 +0000 (0:00:00.856) 0:00:46.324 *****
2026-02-14 02:41:16.703915 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 02:41:16.703922 | orchestrator |
2026-02-14 02:41:16.703929 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-02-14 02:41:16.703936 | orchestrator | Saturday 14 February 2026 02:40:54 +0000 (0:00:00.370) 0:00:46.695 *****
2026-02-14 02:41:16.703941 | orchestrator | changed: [testbed-manager]
2026-02-14 02:41:16.703947 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:41:16.703953 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:41:16.703958 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:41:16.703964 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:41:16.703970 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:41:16.703976 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:41:16.703981 | orchestrator |
2026-02-14 02:41:16.704000 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-02-14 02:41:16.704006 | orchestrator | Saturday 14 February 2026 02:40:55 +0000 (0:00:01.050) 0:00:47.745 *****
2026-02-14 02:41:16.704012 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:41:16.704018 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:41:16.704024 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:41:16.704029 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:41:16.704035 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:41:16.704041 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:41:16.704046 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:41:16.704052 | orchestrator |
2026-02-14 02:41:16.704058 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-02-14 02:41:16.704064 | orchestrator | Saturday 14 February 2026 02:40:56 +0000 (0:00:00.306) 0:00:48.052 *****
2026-02-14 02:41:16.704070 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 02:41:16.704076 | orchestrator |
2026-02-14 02:41:16.704081 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-02-14 02:41:16.704087 | orchestrator | Saturday 14 February 2026 02:40:56 +0000 (0:00:00.399) 0:00:48.452 *****
2026-02-14 02:41:16.704093 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:41:16.704098 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:41:16.704104 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:41:16.704110 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:41:16.704115 | orchestrator | ok: [testbed-manager]
2026-02-14 02:41:16.704121 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:41:16.704127 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:41:16.704132 | orchestrator |
2026-02-14 02:41:16.704138 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-02-14 02:41:16.704144 | orchestrator | Saturday 14 February 2026 02:40:58 +0000 (0:00:01.775) 0:00:50.228 *****
2026-02-14 02:41:16.704150 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:41:16.704155 | orchestrator | changed: [testbed-manager]
2026-02-14 02:41:16.704161 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:41:16.704167 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:41:16.704172 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:41:16.704178 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:41:16.704184 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:41:16.704195 | orchestrator |
2026-02-14 02:41:16.704201 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-02-14 02:41:16.704207 | orchestrator | Saturday 14 February 2026 02:40:59 +0000 (0:00:01.204) 0:00:51.432 *****
2026-02-14 02:41:16.704213 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:41:16.704219 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:41:16.704224 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:41:16.704230 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:41:16.704236 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:41:16.704241 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:41:16.704247 | orchestrator | changed: [testbed-manager]
2026-02-14 02:41:16.704253 | orchestrator |
2026-02-14 02:41:16.704259 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-02-14 02:41:16.704264 | orchestrator | Saturday 14 February 2026 02:41:13 +0000 (0:00:13.973) 0:01:05.405 *****
2026-02-14 02:41:16.704270 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:41:16.704276 | orchestrator | ok: [testbed-manager]
2026-02-14 02:41:16.704281 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:41:16.704287 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:41:16.704293 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:41:16.704299 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:41:16.704304 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:41:16.704310 | orchestrator |
2026-02-14 02:41:16.704316 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-02-14 02:41:16.704321 | orchestrator | Saturday 14 February 2026 02:41:14 +0000 (0:00:01.314) 0:01:06.720 *****
2026-02-14 02:41:16.704327 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:41:16.704333 | orchestrator | ok: [testbed-manager]
2026-02-14 02:41:16.704338 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:41:16.704344 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:41:16.704350 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:41:16.704355 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:41:16.704361 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:41:16.704367 | orchestrator |
2026-02-14 02:41:16.704372 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-02-14 02:41:16.704378 | orchestrator | Saturday 14 February 2026 02:41:15 +0000 (0:00:00.993) 0:01:07.713 *****
2026-02-14 02:41:16.704389 | orchestrator | ok: [testbed-manager]
2026-02-14 02:41:16.704395 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:41:16.704400 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:41:16.704406 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:41:16.704412 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:41:16.704417 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:41:16.704423 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:41:16.704429 | orchestrator |
2026-02-14 02:41:16.704434 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-02-14 02:41:16.704440 | orchestrator | Saturday 14 February 2026 02:41:15 +0000 (0:00:00.277) 0:01:07.990 *****
2026-02-14 02:41:16.704446 | orchestrator | ok: [testbed-manager]
2026-02-14 02:41:16.704452 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:41:16.704457 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:41:16.704463 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:41:16.704469 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:41:16.704474 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:41:16.704480 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:41:16.704485 | orchestrator |
2026-02-14 02:41:16.704491 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-02-14 02:41:16.704497 | orchestrator | Saturday 14 February 2026 02:41:16 +0000 (0:00:00.310) 0:01:08.301 *****
2026-02-14 02:41:16.704503 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 02:41:16.704510 | orchestrator |
2026-02-14 02:41:16.704520 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-02-14 02:43:40.734196 | orchestrator | Saturday 14 February 2026 02:41:16 +0000 (0:00:00.437) 0:01:08.738 *****
2026-02-14 02:43:40.734313 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:43:40.734330 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:43:40.734341 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:43:40.734352 | orchestrator | ok: [testbed-manager]
2026-02-14 02:43:40.734363 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:43:40.734373 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:43:40.734384 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:43:40.734395 | orchestrator |
2026-02-14 02:43:40.734408 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-02-14 02:43:40.734420 | orchestrator | Saturday 14 February 2026 02:41:18 +0000 (0:00:01.739) 0:01:10.478 *****
2026-02-14 02:43:40.734431 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:43:40.734442 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:43:40.734453 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:43:40.734464 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:43:40.734475 | orchestrator | changed: [testbed-manager]
2026-02-14 02:43:40.734485 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:43:40.734496 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:43:40.734506 | orchestrator |
2026-02-14 02:43:40.734517 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-02-14 02:43:40.734529 | orchestrator | Saturday 14 February 2026 02:41:19 +0000 (0:00:00.613) 0:01:11.091 *****
2026-02-14 02:43:40.734540 | orchestrator | ok: [testbed-manager]
2026-02-14 02:43:40.734550 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:43:40.734561 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:43:40.734572 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:43:40.734582 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:43:40.734593 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:43:40.734604 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:43:40.734614 | orchestrator |
2026-02-14 02:43:40.734626 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-02-14 02:43:40.734637 | orchestrator | Saturday 14 February 2026 02:41:19 +0000 (0:00:00.289) 0:01:11.381 *****
2026-02-14 02:43:40.734648 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:43:40.734659 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:43:40.734669 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:43:40.734680 | orchestrator | ok: [testbed-manager]
2026-02-14 02:43:40.734690 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:43:40.734701 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:43:40.734712 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:43:40.734723 | orchestrator |
2026-02-14 02:43:40.734734 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-02-14 02:43:40.734745 | orchestrator | Saturday 14 February 2026 02:41:20 +0000 (0:00:01.223) 0:01:12.605 *****
2026-02-14 02:43:40.734755 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:43:40.734766 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:43:40.734777 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:43:40.734788 | orchestrator | changed: [testbed-manager]
2026-02-14 02:43:40.734799 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:43:40.734809 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:43:40.734820 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:43:40.734831 | orchestrator |
2026-02-14 02:43:40.734846 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-02-14 02:43:40.734858 | orchestrator | Saturday 14 February 2026 02:41:22 +0000 (0:00:01.752) 0:01:14.358 *****
2026-02-14 02:43:40.734868 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:43:40.734879 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:43:40.734890 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:43:40.734901 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:43:40.734911 | orchestrator | ok: [testbed-manager]
2026-02-14 02:43:40.734953 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:43:40.734965 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:43:40.734976 | orchestrator |
2026-02-14 02:43:40.734987 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-02-14 02:43:40.735023 | orchestrator | Saturday 14 February 2026 02:41:24 +0000 (0:00:02.513) 0:01:16.872 *****
2026-02-14 02:43:40.735034 | orchestrator | ok: [testbed-manager]
2026-02-14 02:43:40.735045 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:43:40.735055 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:43:40.735066 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:43:40.735077 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:43:40.735088 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:43:40.735099 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:43:40.735109 | orchestrator |
2026-02-14 02:43:40.735120 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-02-14 02:43:40.735131 | orchestrator | Saturday 14 February 2026 02:41:58 +0000 (0:00:34.130) 0:01:51.002 *****
2026-02-14 02:43:40.735141 | orchestrator | changed: [testbed-manager]
2026-02-14 02:43:40.735152 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:43:40.735163 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:43:40.735174 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:43:40.735185 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:43:40.735195 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:43:40.735206 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:43:40.735217 | orchestrator |
2026-02-14 02:43:40.735227 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-02-14 02:43:40.735238 | orchestrator | Saturday 14 February 2026 02:43:22 +0000 (0:01:23.602) 0:03:14.605 *****
2026-02-14 02:43:40.735249 | orchestrator | ok: [testbed-manager]
2026-02-14 02:43:40.735260 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:43:40.735271 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:43:40.735281 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:43:40.735292 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:43:40.735302 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:43:40.735313 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:43:40.735324 | orchestrator |
2026-02-14 02:43:40.735334 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-02-14 02:43:40.735345 | orchestrator | Saturday 14 February 2026 02:43:24 +0000 (0:00:02.034) 0:03:16.640 *****
2026-02-14 02:43:40.735356 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:43:40.735366 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:43:40.735377 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:43:40.735387 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:43:40.735398 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:43:40.735409 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:43:40.735419 | orchestrator | changed: [testbed-manager]
2026-02-14 02:43:40.735430 | orchestrator |
2026-02-14 02:43:40.735441 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-02-14 02:43:40.735452 | orchestrator | Saturday 14 February 2026 02:43:39 +0000 (0:00:14.763) 0:03:31.403 *****
2026-02-14 02:43:40.735496 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-02-14 02:43:40.735531 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-02-14 02:43:40.735555 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-02-14 02:43:40.735568 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-02-14 02:43:40.735580 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-02-14 02:43:40.735591 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-02-14 02:43:40.735602 | orchestrator |
2026-02-14 02:43:40.735613 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-02-14 02:43:40.735624 | orchestrator | Saturday 14 February 2026 02:43:39 +0000 (0:00:00.519) 0:03:31.923 *****
2026-02-14 02:43:40.735635 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-14 02:43:40.735645 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-14 02:43:40.735656 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:43:40.735667 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-14 02:43:40.735677 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:43:40.735693 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-14 02:43:40.735704 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:43:40.735715 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:43:40.735725 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-14 02:43:40.735736 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-14 02:43:40.735746 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-14 02:43:40.735757 | orchestrator |
2026-02-14 02:43:40.735768 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-02-14 02:43:40.735778 | orchestrator | Saturday 14 February 2026 02:43:40 +0000 (0:00:00.741) 0:03:32.665 *****
2026-02-14 02:43:40.735789 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-14 02:43:40.735801 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-14 02:43:40.735811 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-14 02:43:40.735822 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-14 02:43:40.735833 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-14 02:43:40.735850 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-14 02:43:46.582814 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-14 02:43:46.582990 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-14 02:43:46.583049 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-14 02:43:46.583071 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-14 02:43:46.583090 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-14 02:43:46.583124 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-14 02:43:46.583148 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-14 02:43:46.583158 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-14 02:43:46.583168 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-14 02:43:46.583177 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-14 02:43:46.583187 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-14 02:43:46.583197 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-14 02:43:46.583207 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:43:46.583218 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-14 02:43:46.583228 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-14 02:43:46.583237 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-14 02:43:46.583247 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-14 02:43:46.583257 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-14 02:43:46.583267 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-14 02:43:46.583276 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:43:46.583286 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-14 02:43:46.583299 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-14 02:43:46.583316 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-14 02:43:46.583333 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-14 02:43:46.583350 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-14 02:43:46.583368 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-14 02:43:46.583386 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-14 02:43:46.583404 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-14 02:43:46.583421 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:43:46.583439 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-14 02:43:46.583457 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-14 02:43:46.583494 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-14 02:43:46.583511 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-14 02:43:46.583528 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-14 02:43:46.583545 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-14 02:43:46.583564 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-14 02:43:46.583594 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-14 02:43:46.583612 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:43:46.583630 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-14 02:43:46.583648 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-14 02:43:46.583666 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-14 02:43:46.583683 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-14 02:43:46.583701 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-14 02:43:46.583742 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-14 02:43:46.583761 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-14 02:43:46.583778 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-14 02:43:46.583796 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-14 02:43:46.583813 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-14 02:43:46.583831 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-14 02:43:46.583848 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-14 02:43:46.583865 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-14 02:43:46.583883 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-14 02:43:46.583901 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-14 02:43:46.583918 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-14 02:43:46.583976 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-14 02:43:46.583993 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-14 02:43:46.584008 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-14 02:43:46.584025 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-14 02:43:46.584041 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-14 02:43:46.584057 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-14 02:43:46.584074 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-14 02:43:46.584090 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-14 02:43:46.584107 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-14 02:43:46.584123 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-14 02:43:46.584140 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-14 02:43:46.584156 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-14 02:43:46.584173 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-14 02:43:46.584189 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-14 02:43:46.584216 | orchestrator |
2026-02-14 02:43:46.584235 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-02-14 02:43:46.584252 | orchestrator | Saturday 14 February 2026 02:43:45 +0000 (0:00:04.797) 0:03:37.462 *****
2026-02-14 02:43:46.584269 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-14 02:43:46.584285 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-14 02:43:46.584302 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-14 02:43:46.584318 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-14 02:43:46.584343 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-14 02:43:46.584360 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-14 02:43:46.584375 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-14 02:43:46.584392 | orchestrator |
2026-02-14 02:43:46.584408 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-02-14 02:43:46.584424 | orchestrator | Saturday 14 February 2026 02:43:46 +0000 (0:00:00.634) 0:03:38.096 *****
2026-02-14 02:43:46.584441 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-14 02:43:46.584457 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:43:46.584472 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-14 02:43:46.584487 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:43:46.584503 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-14 02:43:46.584517 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:43:46.584533 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-14 02:43:46.584549 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:43:46.584567 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-14 02:43:46.584584 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-14 02:43:46.584612 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-14 02:44:00.592234 | orchestrator |
2026-02-14 02:44:00.592351 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-02-14 02:44:00.592368 | orchestrator | Saturday 14 February 2026 02:43:46 +0000 (0:00:00.522) 0:03:38.619 *****
2026-02-14 02:44:00.592380 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-14 02:44:00.592392 | orchestrator | 
skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-14 02:44:00.592404 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:44:00.592417 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-14 02:44:00.592428 | orchestrator | skipping: [testbed-node-3] 2026-02-14 02:44:00.592439 | orchestrator | skipping: [testbed-node-4] 2026-02-14 02:44:00.592450 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-14 02:44:00.592461 | orchestrator | skipping: [testbed-node-5] 2026-02-14 02:44:00.592472 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-14 02:44:00.592483 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-14 02:44:00.592494 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-14 02:44:00.592505 | orchestrator | 2026-02-14 02:44:00.592517 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-02-14 02:44:00.592552 | orchestrator | Saturday 14 February 2026 02:43:47 +0000 (0:00:00.641) 0:03:39.260 ***** 2026-02-14 02:44:00.592564 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-14 02:44:00.592575 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:44:00.592586 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-14 02:44:00.592597 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-14 02:44:00.592607 | orchestrator | skipping: [testbed-node-0] 2026-02-14 02:44:00.592618 | orchestrator | skipping: 
[testbed-node-1] 2026-02-14 02:44:00.592629 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-14 02:44:00.592640 | orchestrator | skipping: [testbed-node-2] 2026-02-14 02:44:00.592651 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-14 02:44:00.592661 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-14 02:44:00.592673 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-14 02:44:00.592684 | orchestrator | 2026-02-14 02:44:00.592695 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-02-14 02:44:00.592706 | orchestrator | Saturday 14 February 2026 02:43:47 +0000 (0:00:00.625) 0:03:39.885 ***** 2026-02-14 02:44:00.592716 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:44:00.592727 | orchestrator | skipping: [testbed-node-3] 2026-02-14 02:44:00.592738 | orchestrator | skipping: [testbed-node-4] 2026-02-14 02:44:00.592749 | orchestrator | skipping: [testbed-node-5] 2026-02-14 02:44:00.592760 | orchestrator | skipping: [testbed-node-0] 2026-02-14 02:44:00.592770 | orchestrator | skipping: [testbed-node-1] 2026-02-14 02:44:00.592784 | orchestrator | skipping: [testbed-node-2] 2026-02-14 02:44:00.592796 | orchestrator | 2026-02-14 02:44:00.592809 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2026-02-14 02:44:00.592821 | orchestrator | Saturday 14 February 2026 02:43:48 +0000 (0:00:00.352) 0:03:40.238 ***** 2026-02-14 02:44:00.592834 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:44:00.592847 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:44:00.592859 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:44:00.592872 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:44:00.592885 | 
orchestrator | ok: [testbed-node-2] 2026-02-14 02:44:00.592897 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:44:00.592910 | orchestrator | ok: [testbed-manager] 2026-02-14 02:44:00.592922 | orchestrator | 2026-02-14 02:44:00.592983 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2026-02-14 02:44:00.592997 | orchestrator | Saturday 14 February 2026 02:43:54 +0000 (0:00:06.082) 0:03:46.320 ***** 2026-02-14 02:44:00.593010 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2026-02-14 02:44:00.593023 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:44:00.593037 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2026-02-14 02:44:00.593050 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2026-02-14 02:44:00.593062 | orchestrator | skipping: [testbed-node-3] 2026-02-14 02:44:00.593073 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2026-02-14 02:44:00.593083 | orchestrator | skipping: [testbed-node-4] 2026-02-14 02:44:00.593094 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2026-02-14 02:44:00.593105 | orchestrator | skipping: [testbed-node-5] 2026-02-14 02:44:00.593116 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2026-02-14 02:44:00.593145 | orchestrator | skipping: [testbed-node-0] 2026-02-14 02:44:00.593156 | orchestrator | skipping: [testbed-node-1] 2026-02-14 02:44:00.593167 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2026-02-14 02:44:00.593178 | orchestrator | skipping: [testbed-node-2] 2026-02-14 02:44:00.593197 | orchestrator | 2026-02-14 02:44:00.593208 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-02-14 02:44:00.593219 | orchestrator | Saturday 14 February 2026 02:43:54 +0000 (0:00:00.366) 0:03:46.687 ***** 2026-02-14 02:44:00.593230 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-02-14 02:44:00.593241 | orchestrator | ok: [testbed-manager] => 
(item=cron) 2026-02-14 02:44:00.593252 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-02-14 02:44:00.593281 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-02-14 02:44:00.593293 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-02-14 02:44:00.593304 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-02-14 02:44:00.593315 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-02-14 02:44:00.593326 | orchestrator | 2026-02-14 02:44:00.593337 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-02-14 02:44:00.593348 | orchestrator | Saturday 14 February 2026 02:43:55 +0000 (0:00:01.160) 0:03:47.848 ***** 2026-02-14 02:44:00.593361 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 02:44:00.593374 | orchestrator | 2026-02-14 02:44:00.593385 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-02-14 02:44:00.593396 | orchestrator | Saturday 14 February 2026 02:43:56 +0000 (0:00:00.538) 0:03:48.386 ***** 2026-02-14 02:44:00.593407 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:44:00.593418 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:44:00.593429 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:44:00.593439 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:44:00.593450 | orchestrator | ok: [testbed-manager] 2026-02-14 02:44:00.593461 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:44:00.593472 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:44:00.593482 | orchestrator | 2026-02-14 02:44:00.593494 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-02-14 02:44:00.593505 | orchestrator | Saturday 14 February 2026 02:43:57 +0000 (0:00:01.297) 0:03:49.684 
***** 2026-02-14 02:44:00.593516 | orchestrator | ok: [testbed-manager] 2026-02-14 02:44:00.593526 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:44:00.593537 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:44:00.593548 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:44:00.593558 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:44:00.593569 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:44:00.593580 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:44:00.593590 | orchestrator | 2026-02-14 02:44:00.593601 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-02-14 02:44:00.593612 | orchestrator | Saturday 14 February 2026 02:43:58 +0000 (0:00:00.650) 0:03:50.334 ***** 2026-02-14 02:44:00.593623 | orchestrator | changed: [testbed-manager] 2026-02-14 02:44:00.593634 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:44:00.593644 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:44:00.593655 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:44:00.593666 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:44:00.593677 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:44:00.593688 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:44:00.593698 | orchestrator | 2026-02-14 02:44:00.593709 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-02-14 02:44:00.593720 | orchestrator | Saturday 14 February 2026 02:43:58 +0000 (0:00:00.649) 0:03:50.984 ***** 2026-02-14 02:44:00.593731 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:44:00.593742 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:44:00.593753 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:44:00.593764 | orchestrator | ok: [testbed-manager] 2026-02-14 02:44:00.593775 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:44:00.593785 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:44:00.593796 | orchestrator | ok: [testbed-node-2] 2026-02-14 
02:44:00.593807 | orchestrator | 2026-02-14 02:44:00.593818 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-02-14 02:44:00.593835 | orchestrator | Saturday 14 February 2026 02:43:59 +0000 (0:00:00.648) 0:03:51.633 ***** 2026-02-14 02:44:00.593855 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771035599.0135245, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-14 02:44:00.593871 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771035618.1436858, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-14 02:44:00.593883 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771035612.2100806, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-14 02:44:00.593917 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771035624.614399, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-14 02:44:05.531437 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771035625.782507, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-14 02:44:05.531542 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771035615.7317834, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-14 02:44:05.531559 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771035619.292481, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-14 02:44:05.531597 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-14 02:44:05.531625 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-14 02:44:05.531637 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-14 02:44:05.531648 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-14 02:44:05.531685 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-14 02:44:05.531698 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-14 
02:44:05.531709 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-14 02:44:05.531728 | orchestrator | 2026-02-14 02:44:05.531742 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-02-14 02:44:05.531754 | orchestrator | Saturday 14 February 2026 02:44:00 +0000 (0:00:00.989) 0:03:52.622 ***** 2026-02-14 02:44:05.531765 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:44:05.531777 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:44:05.531787 | orchestrator | changed: [testbed-manager] 2026-02-14 02:44:05.531798 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:44:05.531809 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:44:05.531819 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:44:05.531830 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:44:05.531841 | orchestrator | 2026-02-14 02:44:05.531852 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2026-02-14 02:44:05.531863 | orchestrator | Saturday 14 February 2026 02:44:01 +0000 (0:00:01.081) 0:03:53.704 ***** 2026-02-14 02:44:05.531873 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:44:05.531884 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:44:05.531895 | orchestrator | changed: [testbed-manager] 2026-02-14 02:44:05.531905 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:44:05.531916 | orchestrator | changed: [testbed-node-0] 
2026-02-14 02:44:05.531926 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:44:05.531937 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:44:05.532002 | orchestrator | 2026-02-14 02:44:05.532022 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-02-14 02:44:05.532035 | orchestrator | Saturday 14 February 2026 02:44:02 +0000 (0:00:01.132) 0:03:54.836 ***** 2026-02-14 02:44:05.532048 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:44:05.532060 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:44:05.532072 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:44:05.532085 | orchestrator | changed: [testbed-manager] 2026-02-14 02:44:05.532098 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:44:05.532110 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:44:05.532123 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:44:05.532135 | orchestrator | 2026-02-14 02:44:05.532146 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-02-14 02:44:05.532157 | orchestrator | Saturday 14 February 2026 02:44:03 +0000 (0:00:01.065) 0:03:55.901 ***** 2026-02-14 02:44:05.532168 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:44:05.532179 | orchestrator | skipping: [testbed-node-3] 2026-02-14 02:44:05.532189 | orchestrator | skipping: [testbed-node-4] 2026-02-14 02:44:05.532200 | orchestrator | skipping: [testbed-node-5] 2026-02-14 02:44:05.532210 | orchestrator | skipping: [testbed-node-0] 2026-02-14 02:44:05.532221 | orchestrator | skipping: [testbed-node-1] 2026-02-14 02:44:05.532231 | orchestrator | skipping: [testbed-node-2] 2026-02-14 02:44:05.532242 | orchestrator | 2026-02-14 02:44:05.532253 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-02-14 02:44:05.532263 | orchestrator | Saturday 14 February 2026 02:44:04 +0000 (0:00:00.341) 0:03:56.243 ***** 2026-02-14 
02:44:05.532274 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:44:05.532285 | orchestrator | ok: [testbed-manager] 2026-02-14 02:44:05.532296 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:44:05.532306 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:44:05.532317 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:44:05.532328 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:44:05.532338 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:44:05.532349 | orchestrator | 2026-02-14 02:44:05.532359 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-02-14 02:44:05.532370 | orchestrator | Saturday 14 February 2026 02:44:05 +0000 (0:00:00.817) 0:03:57.060 ***** 2026-02-14 02:44:05.532383 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 02:44:05.532403 | orchestrator | 2026-02-14 02:44:05.532414 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-02-14 02:44:05.532432 | orchestrator | Saturday 14 February 2026 02:44:05 +0000 (0:00:00.506) 0:03:57.566 ***** 2026-02-14 02:45:26.149075 | orchestrator | ok: [testbed-manager] 2026-02-14 02:45:26.149184 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:45:26.149202 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:45:26.149214 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:45:26.149225 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:45:26.149235 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:45:26.149246 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:45:26.149257 | orchestrator | 2026-02-14 02:45:26.149281 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-02-14 02:45:26.149294 | orchestrator | 
Saturday 14 February 2026 02:44:13 +0000 (0:00:07.735) 0:04:05.301 ***** 2026-02-14 02:45:26.149306 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:45:26.149317 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:45:26.149327 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:45:26.149338 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:45:26.149349 | orchestrator | ok: [testbed-manager] 2026-02-14 02:45:26.149360 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:45:26.149371 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:45:26.149381 | orchestrator | 2026-02-14 02:45:26.149393 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-02-14 02:45:26.149404 | orchestrator | Saturday 14 February 2026 02:44:14 +0000 (0:00:01.402) 0:04:06.704 ***** 2026-02-14 02:45:26.149415 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:45:26.149426 | orchestrator | ok: [testbed-manager] 2026-02-14 02:45:26.149437 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:45:26.149447 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:45:26.149458 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:45:26.149469 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:45:26.149479 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:45:26.149490 | orchestrator | 2026-02-14 02:45:26.149501 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-02-14 02:45:26.149512 | orchestrator | Saturday 14 February 2026 02:44:15 +0000 (0:00:01.215) 0:04:07.919 ***** 2026-02-14 02:45:26.149523 | orchestrator | ok: [testbed-manager] 2026-02-14 02:45:26.149534 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:45:26.149544 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:45:26.149555 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:45:26.149568 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:45:26.149580 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:45:26.149593 | orchestrator | ok: 
[testbed-node-2] 2026-02-14 02:45:26.149605 | orchestrator | 2026-02-14 02:45:26.149618 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-02-14 02:45:26.149632 | orchestrator | Saturday 14 February 2026 02:44:16 +0000 (0:00:00.357) 0:04:08.277 ***** 2026-02-14 02:45:26.149644 | orchestrator | ok: [testbed-manager] 2026-02-14 02:45:26.149655 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:45:26.149668 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:45:26.149680 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:45:26.149692 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:45:26.149705 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:45:26.149717 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:45:26.149729 | orchestrator | 2026-02-14 02:45:26.149742 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-02-14 02:45:26.149755 | orchestrator | Saturday 14 February 2026 02:44:16 +0000 (0:00:00.363) 0:04:08.640 ***** 2026-02-14 02:45:26.149767 | orchestrator | ok: [testbed-manager] 2026-02-14 02:45:26.149779 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:45:26.149790 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:45:26.149826 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:45:26.149838 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:45:26.149848 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:45:26.149859 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:45:26.149870 | orchestrator | 2026-02-14 02:45:26.149881 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-02-14 02:45:26.149892 | orchestrator | Saturday 14 February 2026 02:44:16 +0000 (0:00:00.380) 0:04:09.021 ***** 2026-02-14 02:45:26.149903 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:45:26.149914 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:45:26.149925 | orchestrator | ok: 
[testbed-manager]
2026-02-14 02:45:26.149935 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:45:26.149946 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:45:26.149956 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:45:26.149967 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:45:26.149978 | orchestrator |
2026-02-14 02:45:26.149988 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-02-14 02:45:26.150067 | orchestrator | Saturday 14 February 2026 02:44:22 +0000 (0:00:05.395) 0:04:14.416 *****
2026-02-14 02:45:26.150085 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 02:45:26.150101 | orchestrator |
2026-02-14 02:45:26.150112 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-02-14 02:45:26.150123 | orchestrator | Saturday 14 February 2026 02:44:22 +0000 (0:00:00.457) 0:04:14.873 *****
2026-02-14 02:45:26.150133 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-02-14 02:45:26.150144 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-02-14 02:45:26.150155 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:45:26.150166 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-02-14 02:45:26.150177 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-02-14 02:45:26.150206 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-02-14 02:45:26.150218 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-02-14 02:45:26.150228 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:45:26.150239 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-02-14 02:45:26.150250 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-02-14 02:45:26.150266 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:45:26.150286 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-02-14 02:45:26.150306 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-02-14 02:45:26.150325 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:45:26.150343 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-02-14 02:45:26.150362 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:45:26.150404 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-02-14 02:45:26.150424 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:45:26.150442 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-02-14 02:45:26.150460 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-02-14 02:45:26.150477 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:45:26.150496 | orchestrator |
2026-02-14 02:45:26.150515 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-02-14 02:45:26.150532 | orchestrator | Saturday 14 February 2026 02:44:23 +0000 (0:00:00.438) 0:04:15.312 *****
2026-02-14 02:45:26.150553 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 02:45:26.150572 | orchestrator |
2026-02-14 02:45:26.150588 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-02-14 02:45:26.150613 | orchestrator | Saturday 14 February 2026 02:44:23 +0000 (0:00:00.522) 0:04:15.834 *****
2026-02-14 02:45:26.150624 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-02-14 02:45:26.150634 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:45:26.150646 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-02-14 02:45:26.150657 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-02-14 02:45:26.150667 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:45:26.150678 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-02-14 02:45:26.150688 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:45:26.150699 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-02-14 02:45:26.150709 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:45:26.150720 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-02-14 02:45:26.150730 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:45:26.150741 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:45:26.150752 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-02-14 02:45:26.150763 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:45:26.150774 | orchestrator |
2026-02-14 02:45:26.150784 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-02-14 02:45:26.150796 | orchestrator | Saturday 14 February 2026 02:44:24 +0000 (0:00:00.396) 0:04:16.231 *****
2026-02-14 02:45:26.150806 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 02:45:26.150818 | orchestrator |
2026-02-14 02:45:26.150829 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-02-14 02:45:26.150839 | orchestrator | Saturday 14 February 2026 02:44:24 +0000 (0:00:00.536) 0:04:16.767 *****
2026-02-14 02:45:26.150850 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:45:26.150860 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:45:26.150871 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:45:26.150882 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:45:26.150900 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:45:26.150912 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:45:26.150923 | orchestrator | changed: [testbed-manager]
2026-02-14 02:45:26.150934 | orchestrator |
2026-02-14 02:45:26.150945 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-02-14 02:45:26.150955 | orchestrator | Saturday 14 February 2026 02:45:01 +0000 (0:00:36.623) 0:04:53.391 *****
2026-02-14 02:45:26.150966 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:45:26.150977 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:45:26.150987 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:45:26.151021 | orchestrator | changed: [testbed-manager]
2026-02-14 02:45:26.151034 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:45:26.151045 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:45:26.151056 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:45:26.151067 | orchestrator |
2026-02-14 02:45:26.151078 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-02-14 02:45:26.151088 | orchestrator | Saturday 14 February 2026 02:45:08 +0000 (0:00:07.472) 0:05:00.864 *****
2026-02-14 02:45:26.151099 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:45:26.151110 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:45:26.151121 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:45:26.151132 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:45:26.151142 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:45:26.151153 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:45:26.151164 | orchestrator | changed: [testbed-manager]
2026-02-14 02:45:26.151175 | orchestrator |
2026-02-14 02:45:26.151186 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-02-14 02:45:26.151204 | orchestrator | Saturday 14 February 2026 02:45:17 +0000 (0:00:08.652) 0:05:09.517 *****
2026-02-14 02:45:26.151215 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:45:26.151226 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:45:26.151237 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:45:26.151247 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:45:26.151258 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:45:26.151269 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:45:26.151279 | orchestrator | ok: [testbed-manager]
2026-02-14 02:45:26.151291 | orchestrator |
2026-02-14 02:45:26.151301 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-02-14 02:45:26.151312 | orchestrator | Saturday 14 February 2026 02:45:19 +0000 (0:00:01.796) 0:05:11.313 *****
2026-02-14 02:45:26.151323 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:45:26.151334 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:45:26.151345 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:45:26.151355 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:45:26.151366 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:45:26.151376 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:45:26.151387 | orchestrator | changed: [testbed-manager]
2026-02-14 02:45:26.151399 | orchestrator |
2026-02-14 02:45:26.151418 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-02-14 02:45:39.903465 | orchestrator | Saturday 14 February 2026 02:45:26 +0000 (0:00:06.860) 0:05:18.173 *****
2026-02-14 02:45:39.903579 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 02:45:39.903598 | orchestrator |
2026-02-14 02:45:39.903611 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-02-14 02:45:39.903623 | orchestrator | Saturday 14 February 2026 02:45:26 +0000 (0:00:00.646) 0:05:18.819 *****
2026-02-14 02:45:39.903634 | orchestrator | changed: [testbed-manager]
2026-02-14 02:45:39.903646 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:45:39.903656 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:45:39.903667 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:45:39.903678 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:45:39.903689 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:45:39.903699 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:45:39.903710 | orchestrator |
2026-02-14 02:45:39.903721 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-02-14 02:45:39.903732 | orchestrator | Saturday 14 February 2026 02:45:27 +0000 (0:00:00.911) 0:05:19.731 *****
2026-02-14 02:45:39.903743 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:45:39.903754 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:45:39.903765 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:45:39.903776 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:45:39.903786 | orchestrator | ok: [testbed-manager]
2026-02-14 02:45:39.903797 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:45:39.903808 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:45:39.903818 | orchestrator |
2026-02-14 02:45:39.903829 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-02-14 02:45:39.903840 | orchestrator | Saturday 14 February 2026 02:45:29 +0000 (0:00:01.857) 0:05:21.588 *****
2026-02-14 02:45:39.903851 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:45:39.903862 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:45:39.903872 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:45:39.903883 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:45:39.903894 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:45:39.903905 | orchestrator | changed: [testbed-manager]
2026-02-14 02:45:39.903916 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:45:39.903927 | orchestrator |
2026-02-14 02:45:39.903938 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-02-14 02:45:39.903949 | orchestrator | Saturday 14 February 2026 02:45:30 +0000 (0:00:00.901) 0:05:22.490 *****
2026-02-14 02:45:39.903984 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:45:39.903995 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:45:39.904088 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:45:39.904102 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:45:39.904114 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:45:39.904126 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:45:39.904139 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:45:39.904151 | orchestrator |
2026-02-14 02:45:39.904164 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-02-14 02:45:39.904177 | orchestrator | Saturday 14 February 2026 02:45:30 +0000 (0:00:00.417) 0:05:22.907 *****
2026-02-14 02:45:39.904189 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:45:39.904202 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:45:39.904214 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:45:39.904241 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:45:39.904252 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:45:39.904263 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:45:39.904273 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:45:39.904284 | orchestrator |
2026-02-14 02:45:39.904294 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-02-14 02:45:39.904305 | orchestrator | Saturday 14 February 2026 02:45:31 +0000 (0:00:00.571) 0:05:23.479 *****
2026-02-14 02:45:39.904316 | orchestrator | ok: [testbed-manager]
2026-02-14 02:45:39.904327 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:45:39.904337 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:45:39.904348 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:45:39.904359 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:45:39.904369 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:45:39.904380 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:45:39.904390 | orchestrator |
2026-02-14 02:45:39.904401 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-02-14 02:45:39.904412 | orchestrator | Saturday 14 February 2026 02:45:31 +0000 (0:00:00.434) 0:05:23.913 *****
2026-02-14 02:45:39.904423 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:45:39.904433 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:45:39.904444 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:45:39.904455 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:45:39.904465 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:45:39.904476 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:45:39.904486 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:45:39.904497 | orchestrator |
2026-02-14 02:45:39.904508 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-02-14 02:45:39.904520 | orchestrator | Saturday 14 February 2026 02:45:32 +0000 (0:00:00.433) 0:05:24.347 *****
2026-02-14 02:45:39.904530 | orchestrator | ok: [testbed-manager]
2026-02-14 02:45:39.904541 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:45:39.904552 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:45:39.904562 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:45:39.904573 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:45:39.904584 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:45:39.904594 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:45:39.904605 | orchestrator |
2026-02-14 02:45:39.904616 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-02-14 02:45:39.904627 | orchestrator | Saturday 14 February 2026 02:45:32 +0000 (0:00:00.408) 0:05:24.755 *****
2026-02-14 02:45:39.904638 | orchestrator | ok: [testbed-manager] =>
2026-02-14 02:45:39.904648 | orchestrator |  docker_version: 5:27.5.1
2026-02-14 02:45:39.904659 | orchestrator | ok: [testbed-node-3] =>
2026-02-14 02:45:39.904670 | orchestrator |  docker_version: 5:27.5.1
2026-02-14 02:45:39.904680 | orchestrator | ok: [testbed-node-4] =>
2026-02-14 02:45:39.904691 | orchestrator |  docker_version: 5:27.5.1
2026-02-14 02:45:39.904702 | orchestrator | ok: [testbed-node-5] =>
2026-02-14 02:45:39.904712 | orchestrator |  docker_version: 5:27.5.1
2026-02-14 02:45:39.904750 | orchestrator | ok: [testbed-node-0] =>
2026-02-14 02:45:39.904762 | orchestrator |  docker_version: 5:27.5.1
2026-02-14 02:45:39.904773 | orchestrator | ok: [testbed-node-1] =>
2026-02-14 02:45:39.904783 | orchestrator |  docker_version: 5:27.5.1
2026-02-14 02:45:39.904794 | orchestrator | ok: [testbed-node-2] =>
2026-02-14 02:45:39.904804 | orchestrator |  docker_version: 5:27.5.1
2026-02-14 02:45:39.904815 | orchestrator |
2026-02-14 02:45:39.904826 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-02-14 02:45:39.904836 | orchestrator | Saturday 14 February 2026 02:45:33 +0000 (0:00:00.392) 0:05:25.147 *****
2026-02-14 02:45:39.904847 | orchestrator | ok: [testbed-manager] =>
2026-02-14 02:45:39.904858 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-14 02:45:39.904868 | orchestrator | ok: [testbed-node-3] =>
2026-02-14 02:45:39.904878 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-14 02:45:39.904889 | orchestrator | ok: [testbed-node-4] =>
2026-02-14 02:45:39.904899 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-14 02:45:39.904910 | orchestrator | ok: [testbed-node-5] =>
2026-02-14 02:45:39.904920 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-14 02:45:39.904931 | orchestrator | ok: [testbed-node-0] =>
2026-02-14 02:45:39.904941 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-14 02:45:39.904952 | orchestrator | ok: [testbed-node-1] =>
2026-02-14 02:45:39.904962 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-14 02:45:39.904973 | orchestrator | ok: [testbed-node-2] =>
2026-02-14 02:45:39.904983 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-14 02:45:39.904994 | orchestrator |
2026-02-14 02:45:39.905027 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-02-14 02:45:39.905038 | orchestrator | Saturday 14 February 2026 02:45:33 +0000 (0:00:00.382) 0:05:25.530 *****
2026-02-14 02:45:39.905049 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:45:39.905060 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:45:39.905071 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:45:39.905081 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:45:39.905092 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:45:39.905102 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:45:39.905113 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:45:39.905124 | orchestrator |
2026-02-14 02:45:39.905134 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-02-14 02:45:39.905145 | orchestrator | Saturday 14 February 2026 02:45:33 +0000 (0:00:00.313) 0:05:25.843 *****
2026-02-14 02:45:39.905156 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:45:39.905166 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:45:39.905177 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:45:39.905187 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:45:39.905198 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:45:39.905208 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:45:39.905219 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:45:39.905229 | orchestrator |
2026-02-14 02:45:39.905240 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-02-14 02:45:39.905251 | orchestrator | Saturday 14 February 2026 02:45:34 +0000 (0:00:00.408) 0:05:26.251 *****
2026-02-14 02:45:39.905263 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 02:45:39.905276 | orchestrator |
2026-02-14 02:45:39.905292 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-02-14 02:45:39.905303 | orchestrator | Saturday 14 February 2026 02:45:34 +0000 (0:00:00.573) 0:05:26.825 *****
2026-02-14 02:45:39.905314 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:45:39.905325 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:45:39.905336 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:45:39.905346 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:45:39.905357 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:45:39.905375 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:45:39.905385 | orchestrator | ok: [testbed-manager]
2026-02-14 02:45:39.905396 | orchestrator |
2026-02-14 02:45:39.905407 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-02-14 02:45:39.905423 | orchestrator | Saturday 14 February 2026 02:45:35 +0000 (0:00:01.202) 0:05:28.028 *****
2026-02-14 02:45:39.905441 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:45:39.905461 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:45:39.905479 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:45:39.905497 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:45:39.905513 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:45:39.905529 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:45:39.905545 | orchestrator | ok: [testbed-manager]
2026-02-14 02:45:39.905563 | orchestrator |
2026-02-14 02:45:39.905582 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-02-14 02:45:39.905601 | orchestrator | Saturday 14 February 2026 02:45:39 +0000 (0:00:03.417) 0:05:31.446 *****
2026-02-14 02:45:39.905618 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-02-14 02:45:39.905636 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-02-14 02:45:39.905654 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-02-14 02:45:39.905672 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-02-14 02:45:39.905691 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-02-14 02:45:39.905709 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-02-14 02:45:39.905726 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:45:39.905746 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-02-14 02:45:39.905763 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-02-14 02:45:39.905782 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-02-14 02:45:39.905800 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:45:39.905819 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-02-14 02:45:39.905839 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-02-14 02:45:39.905850 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-02-14 02:45:39.905861 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:45:39.905871 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-02-14 02:45:39.905893 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-02-14 02:46:41.043455 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:46:41.043534 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-02-14 02:46:41.043541 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-02-14 02:46:41.043545 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-02-14 02:46:41.043549 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-02-14 02:46:41.043553 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:46:41.043558 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:46:41.043562 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-02-14 02:46:41.043566 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-02-14 02:46:41.043570 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-02-14 02:46:41.043574 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:46:41.043578 | orchestrator |
2026-02-14 02:46:41.043583 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-02-14 02:46:41.043588 | orchestrator | Saturday 14 February 2026 02:45:40 +0000 (0:00:00.737) 0:05:32.184 *****
2026-02-14 02:46:41.043592 | orchestrator | ok: [testbed-manager]
2026-02-14 02:46:41.043596 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:46:41.043600 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:46:41.043604 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:46:41.043608 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:46:41.043612 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:46:41.043632 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:46:41.043636 | orchestrator |
2026-02-14 02:46:41.043640 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-02-14 02:46:41.043644 | orchestrator | Saturday 14 February 2026 02:45:47 +0000 (0:00:06.891) 0:05:39.075 *****
2026-02-14 02:46:41.043647 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:46:41.043651 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:46:41.043655 | orchestrator | ok: [testbed-manager]
2026-02-14 02:46:41.043659 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:46:41.043662 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:46:41.043666 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:46:41.043670 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:46:41.043674 | orchestrator |
2026-02-14 02:46:41.043677 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-02-14 02:46:41.043681 | orchestrator | Saturday 14 February 2026 02:45:48 +0000 (0:00:01.266) 0:05:40.342 *****
2026-02-14 02:46:41.043685 | orchestrator | ok: [testbed-manager]
2026-02-14 02:46:41.043688 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:46:41.043692 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:46:41.043696 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:46:41.043700 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:46:41.043703 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:46:41.043707 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:46:41.043711 | orchestrator |
2026-02-14 02:46:41.043715 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-02-14 02:46:41.043718 | orchestrator | Saturday 14 February 2026 02:45:56 +0000 (0:00:08.202) 0:05:48.544 *****
2026-02-14 02:46:41.043722 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:46:41.043726 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:46:41.043730 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:46:41.043733 | orchestrator | changed: [testbed-manager]
2026-02-14 02:46:41.043737 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:46:41.043741 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:46:41.043745 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:46:41.043748 | orchestrator |
2026-02-14 02:46:41.043752 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-02-14 02:46:41.043756 | orchestrator | Saturday 14 February 2026 02:46:00 +0000 (0:00:03.592) 0:05:52.137 *****
2026-02-14 02:46:41.043760 | orchestrator | ok: [testbed-manager]
2026-02-14 02:46:41.043764 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:46:41.043768 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:46:41.043772 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:46:41.043775 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:46:41.043779 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:46:41.043783 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:46:41.043786 | orchestrator |
2026-02-14 02:46:41.043790 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-02-14 02:46:41.043794 | orchestrator | Saturday 14 February 2026 02:46:01 +0000 (0:00:01.389) 0:05:53.526 *****
2026-02-14 02:46:41.043798 | orchestrator | ok: [testbed-manager]
2026-02-14 02:46:41.043801 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:46:41.043805 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:46:41.043809 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:46:41.043812 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:46:41.043816 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:46:41.043820 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:46:41.043824 | orchestrator |
2026-02-14 02:46:41.043828 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-02-14 02:46:41.043831 | orchestrator | Saturday 14 February 2026 02:46:03 +0000 (0:00:01.693) 0:05:55.220 *****
2026-02-14 02:46:41.043835 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:46:41.043839 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:46:41.043843 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:46:41.043846 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:46:41.043853 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:46:41.043857 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:46:41.043861 | orchestrator | changed: [testbed-manager]
2026-02-14 02:46:41.043865 | orchestrator |
2026-02-14 02:46:41.043868 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-02-14 02:46:41.043872 | orchestrator | Saturday 14 February 2026 02:46:03 +0000 (0:00:00.773) 0:05:55.993 *****
2026-02-14 02:46:41.043876 | orchestrator | ok: [testbed-manager]
2026-02-14 02:46:41.043880 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:46:41.043883 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:46:41.043887 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:46:41.043891 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:46:41.043894 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:46:41.043898 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:46:41.043902 | orchestrator |
2026-02-14 02:46:41.043905 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-02-14 02:46:41.043919 | orchestrator | Saturday 14 February 2026 02:46:13 +0000 (0:00:09.774) 0:06:05.768 *****
2026-02-14 02:46:41.043923 | orchestrator | changed: [testbed-manager]
2026-02-14 02:46:41.043927 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:46:41.043931 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:46:41.043934 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:46:41.043938 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:46:41.043942 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:46:41.043945 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:46:41.043949 | orchestrator |
2026-02-14 02:46:41.043953 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-02-14 02:46:41.043957 | orchestrator | Saturday 14 February 2026 02:46:14 +0000 (0:00:01.033) 0:06:06.802 *****
2026-02-14 02:46:41.043961 | orchestrator | ok: [testbed-manager]
2026-02-14 02:46:41.043964 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:46:41.043968 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:46:41.043972 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:46:41.043975 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:46:41.043979 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:46:41.043983 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:46:41.043987 | orchestrator |
2026-02-14 02:46:41.043990 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-02-14 02:46:41.043994 | orchestrator | Saturday 14 February 2026 02:46:23 +0000 (0:00:08.695) 0:06:15.498 *****
2026-02-14 02:46:41.043998 | orchestrator | ok: [testbed-manager]
2026-02-14 02:46:41.044001 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:46:41.044005 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:46:41.044009 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:46:41.044013 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:46:41.044016 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:46:41.044020 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:46:41.044024 | orchestrator |
2026-02-14 02:46:41.044028 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-02-14 02:46:41.044032 | orchestrator | Saturday 14 February 2026 02:46:34 +0000 (0:00:10.723) 0:06:26.222 *****
2026-02-14 02:46:41.044037 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-02-14 02:46:41.044041 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-02-14 02:46:41.044046 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-02-14 02:46:41.044068 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-02-14 02:46:41.044073 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-02-14 02:46:41.044077 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-02-14 02:46:41.044082 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-02-14 02:46:41.044086 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-02-14 02:46:41.044090 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-02-14 02:46:41.044098 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-02-14 02:46:41.044103 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-02-14 02:46:41.044137 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-02-14 02:46:41.044142 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-02-14 02:46:41.044146 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-02-14 02:46:41.044150 | orchestrator |
2026-02-14 02:46:41.044155 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-02-14 02:46:41.044159 | orchestrator | Saturday 14 February 2026 02:46:35 +0000 (0:00:01.243) 0:06:27.465 *****
2026-02-14 02:46:41.044166 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:46:41.044170 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:46:41.044175 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:46:41.044179 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:46:41.044183 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:46:41.044187 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:46:41.044192 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:46:41.044196 | orchestrator |
2026-02-14 02:46:41.044200 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-02-14 02:46:41.044204 | orchestrator | Saturday 14 February 2026 02:46:35 +0000 (0:00:00.559) 0:06:28.025 *****
2026-02-14 02:46:41.044208 | orchestrator | ok: [testbed-manager]
2026-02-14 02:46:41.044213 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:46:41.044217 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:46:41.044222 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:46:41.044226 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:46:41.044230 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:46:41.044234 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:46:41.044239 | orchestrator |
2026-02-14 02:46:41.044243 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-02-14 02:46:41.044248 | orchestrator | Saturday 14 February 2026 02:46:40 +0000 (0:00:04.032) 0:06:32.058 *****
2026-02-14 02:46:41.044252 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:46:41.044257 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:46:41.044261 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:46:41.044265 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:46:41.044270 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:46:41.044274 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:46:41.044279 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:46:41.044283 | orchestrator |
2026-02-14 02:46:41.044288 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-02-14 02:46:41.044292 | orchestrator | Saturday 14 February 2026 02:46:40 +0000 (0:00:00.516) 0:06:32.574 *****
2026-02-14 02:46:41.044296 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-02-14 02:46:41.044300 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-02-14 02:46:41.044304 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:46:41.044308 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-02-14 02:46:41.044312 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-02-14 02:46:41.044315 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:46:41.044319 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-02-14 02:46:41.044323 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-02-14 02:46:41.044327 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:46:41.044334 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-02-14 02:47:00.847694 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-02-14 02:47:00.847782 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:47:00.847792 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-02-14 02:47:00.847799 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-02-14 02:47:00.847806 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:47:00.847866 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-02-14 02:47:00.847874 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-02-14 02:47:00.847880 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:47:00.847886 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-02-14 02:47:00.847893 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-02-14 02:47:00.847899 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:47:00.847905 | orchestrator |
2026-02-14 02:47:00.847913 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-02-14 02:47:00.847921 | orchestrator | Saturday 14 February 2026 02:46:41 +0000 (0:00:00.779) 0:06:33.354 *****
2026-02-14 02:47:00.847927 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:47:00.847934 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:47:00.847940 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:47:00.847946 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:47:00.847952 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:47:00.847958 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:47:00.847964 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:47:00.847970 | orchestrator |
2026-02-14 02:47:00.847976 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-02-14 02:47:00.847983 | orchestrator | Saturday 14 February 2026 02:46:41 +0000 (0:00:00.536) 0:06:33.891 *****
2026-02-14 02:47:00.847989 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:47:00.847995 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:47:00.848001 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:47:00.848007 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:47:00.848013 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:47:00.848019 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:47:00.848025 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:47:00.848031 | orchestrator |
2026-02-14 02:47:00.848037 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-02-14 02:47:00.848043 | orchestrator | Saturday 14 February 2026 02:46:42 +0000 (0:00:00.550) 0:06:34.442 *****
2026-02-14 02:47:00.848049 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:47:00.848055 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:47:00.848061 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:47:00.848125 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:47:00.848137 | orchestrator |
skipping: [testbed-node-0] 2026-02-14 02:47:00.848147 | orchestrator | skipping: [testbed-node-1] 2026-02-14 02:47:00.848157 | orchestrator | skipping: [testbed-node-2] 2026-02-14 02:47:00.848164 | orchestrator | 2026-02-14 02:47:00.848170 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-02-14 02:47:00.848176 | orchestrator | Saturday 14 February 2026 02:46:42 +0000 (0:00:00.540) 0:06:34.982 ***** 2026-02-14 02:47:00.848182 | orchestrator | ok: [testbed-manager] 2026-02-14 02:47:00.848189 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:47:00.848195 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:47:00.848201 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:47:00.848207 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:47:00.848213 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:47:00.848219 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:47:00.848226 | orchestrator | 2026-02-14 02:47:00.848232 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-02-14 02:47:00.848238 | orchestrator | Saturday 14 February 2026 02:46:44 +0000 (0:00:02.024) 0:06:37.006 ***** 2026-02-14 02:47:00.848246 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 02:47:00.848254 | orchestrator | 2026-02-14 02:47:00.848261 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-02-14 02:47:00.848269 | orchestrator | Saturday 14 February 2026 02:46:45 +0000 (0:00:00.877) 0:06:37.884 ***** 2026-02-14 02:47:00.848287 | orchestrator | ok: [testbed-manager] 2026-02-14 02:47:00.848294 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:47:00.848301 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:47:00.848308 | orchestrator | 
changed: [testbed-node-5] 2026-02-14 02:47:00.848315 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:47:00.848322 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:47:00.848329 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:47:00.848336 | orchestrator | 2026-02-14 02:47:00.848343 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-02-14 02:47:00.848350 | orchestrator | Saturday 14 February 2026 02:46:46 +0000 (0:00:00.858) 0:06:38.742 ***** 2026-02-14 02:47:00.848357 | orchestrator | ok: [testbed-manager] 2026-02-14 02:47:00.848365 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:47:00.848371 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:47:00.848378 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:47:00.848385 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:47:00.848392 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:47:00.848399 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:47:00.848406 | orchestrator | 2026-02-14 02:47:00.848413 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-02-14 02:47:00.848420 | orchestrator | Saturday 14 February 2026 02:46:47 +0000 (0:00:00.847) 0:06:39.590 ***** 2026-02-14 02:47:00.848427 | orchestrator | ok: [testbed-manager] 2026-02-14 02:47:00.848434 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:47:00.848442 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:47:00.848449 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:47:00.848456 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:47:00.848463 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:47:00.848469 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:47:00.848476 | orchestrator | 2026-02-14 02:47:00.848483 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2026-02-14 02:47:00.848503 | 
orchestrator | Saturday 14 February 2026 02:46:49 +0000 (0:00:01.646) 0:06:41.236 ***** 2026-02-14 02:47:00.848510 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:47:00.848517 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:47:00.848524 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:47:00.848531 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:47:00.848538 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:47:00.848545 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:47:00.848552 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:47:00.848559 | orchestrator | 2026-02-14 02:47:00.848566 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-02-14 02:47:00.848574 | orchestrator | Saturday 14 February 2026 02:46:50 +0000 (0:00:01.425) 0:06:42.662 ***** 2026-02-14 02:47:00.848581 | orchestrator | ok: [testbed-manager] 2026-02-14 02:47:00.848588 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:47:00.848595 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:47:00.848601 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:47:00.848609 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:47:00.848616 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:47:00.848623 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:47:00.848630 | orchestrator | 2026-02-14 02:47:00.848638 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-02-14 02:47:00.848644 | orchestrator | Saturday 14 February 2026 02:46:51 +0000 (0:00:01.331) 0:06:43.994 ***** 2026-02-14 02:47:00.848650 | orchestrator | changed: [testbed-manager] 2026-02-14 02:47:00.848656 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:47:00.848662 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:47:00.848668 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:47:00.848674 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:47:00.848680 | 
orchestrator | changed: [testbed-node-1] 2026-02-14 02:47:00.848686 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:47:00.848692 | orchestrator | 2026-02-14 02:47:00.848703 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-02-14 02:47:00.848709 | orchestrator | Saturday 14 February 2026 02:46:53 +0000 (0:00:01.492) 0:06:45.486 ***** 2026-02-14 02:47:00.848716 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 02:47:00.848722 | orchestrator | 2026-02-14 02:47:00.848728 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-02-14 02:47:00.848735 | orchestrator | Saturday 14 February 2026 02:46:54 +0000 (0:00:01.119) 0:06:46.605 ***** 2026-02-14 02:47:00.848741 | orchestrator | ok: [testbed-manager] 2026-02-14 02:47:00.848747 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:47:00.848753 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:47:00.848759 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:47:00.848765 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:47:00.848771 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:47:00.848777 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:47:00.848783 | orchestrator | 2026-02-14 02:47:00.848790 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-02-14 02:47:00.848796 | orchestrator | Saturday 14 February 2026 02:46:55 +0000 (0:00:01.361) 0:06:47.967 ***** 2026-02-14 02:47:00.848802 | orchestrator | ok: [testbed-manager] 2026-02-14 02:47:00.848808 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:47:00.848814 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:47:00.848820 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:47:00.848826 | orchestrator | 
ok: [testbed-node-0] 2026-02-14 02:47:00.848843 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:47:00.848849 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:47:00.848856 | orchestrator | 2026-02-14 02:47:00.848862 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-02-14 02:47:00.848868 | orchestrator | Saturday 14 February 2026 02:46:57 +0000 (0:00:01.141) 0:06:49.108 ***** 2026-02-14 02:47:00.848874 | orchestrator | ok: [testbed-manager] 2026-02-14 02:47:00.848880 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:47:00.848886 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:47:00.848892 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:47:00.848898 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:47:00.848904 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:47:00.848910 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:47:00.848916 | orchestrator | 2026-02-14 02:47:00.848923 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-02-14 02:47:00.848929 | orchestrator | Saturday 14 February 2026 02:46:58 +0000 (0:00:01.152) 0:06:50.261 ***** 2026-02-14 02:47:00.848935 | orchestrator | ok: [testbed-manager] 2026-02-14 02:47:00.848941 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:47:00.848947 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:47:00.848953 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:47:00.848959 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:47:00.848965 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:47:00.848971 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:47:00.848977 | orchestrator | 2026-02-14 02:47:00.848983 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-02-14 02:47:00.848990 | orchestrator | Saturday 14 February 2026 02:46:59 +0000 (0:00:01.394) 0:06:51.655 ***** 2026-02-14 02:47:00.848996 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 02:47:00.849002 | orchestrator | 2026-02-14 02:47:00.849008 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-14 02:47:00.849014 | orchestrator | Saturday 14 February 2026 02:47:00 +0000 (0:00:00.927) 0:06:52.583 ***** 2026-02-14 02:47:00.849020 | orchestrator | 2026-02-14 02:47:00.849027 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-14 02:47:00.849037 | orchestrator | Saturday 14 February 2026 02:47:00 +0000 (0:00:00.040) 0:06:52.623 ***** 2026-02-14 02:47:00.849043 | orchestrator | 2026-02-14 02:47:00.849049 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-14 02:47:00.849055 | orchestrator | Saturday 14 February 2026 02:47:00 +0000 (0:00:00.039) 0:06:52.662 ***** 2026-02-14 02:47:00.849061 | orchestrator | 2026-02-14 02:47:00.849119 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-14 02:47:00.849132 | orchestrator | Saturday 14 February 2026 02:47:00 +0000 (0:00:00.047) 0:06:52.710 ***** 2026-02-14 02:47:26.440241 | orchestrator | 2026-02-14 02:47:26.440359 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-14 02:47:26.440377 | orchestrator | Saturday 14 February 2026 02:47:00 +0000 (0:00:00.039) 0:06:52.749 ***** 2026-02-14 02:47:26.440388 | orchestrator | 2026-02-14 02:47:26.440399 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-14 02:47:26.440410 | orchestrator | Saturday 14 February 2026 02:47:00 +0000 (0:00:00.038) 0:06:52.788 ***** 2026-02-14 02:47:26.440421 | orchestrator | 2026-02-14 
02:47:26.440432 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-14 02:47:26.440442 | orchestrator | Saturday 14 February 2026 02:47:00 +0000 (0:00:00.047) 0:06:52.835 ***** 2026-02-14 02:47:26.440453 | orchestrator | 2026-02-14 02:47:26.440464 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-14 02:47:26.440475 | orchestrator | Saturday 14 February 2026 02:47:00 +0000 (0:00:00.040) 0:06:52.876 ***** 2026-02-14 02:47:26.440486 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:47:26.440497 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:47:26.440508 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:47:26.440519 | orchestrator | 2026-02-14 02:47:26.440530 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-02-14 02:47:26.440540 | orchestrator | Saturday 14 February 2026 02:47:01 +0000 (0:00:01.161) 0:06:54.038 ***** 2026-02-14 02:47:26.440551 | orchestrator | changed: [testbed-manager] 2026-02-14 02:47:26.440563 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:47:26.440574 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:47:26.440585 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:47:26.440595 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:47:26.440606 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:47:26.440616 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:47:26.440627 | orchestrator | 2026-02-14 02:47:26.440638 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-02-14 02:47:26.440649 | orchestrator | Saturday 14 February 2026 02:47:03 +0000 (0:00:01.537) 0:06:55.575 ***** 2026-02-14 02:47:26.440659 | orchestrator | changed: [testbed-manager] 2026-02-14 02:47:26.440670 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:47:26.440680 | orchestrator | changed: [testbed-node-4] 2026-02-14 
02:47:26.440691 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:47:26.440701 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:47:26.440712 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:47:26.440722 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:47:26.440735 | orchestrator | 2026-02-14 02:47:26.440748 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-02-14 02:47:26.440761 | orchestrator | Saturday 14 February 2026 02:47:04 +0000 (0:00:01.173) 0:06:56.749 ***** 2026-02-14 02:47:26.440772 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:47:26.440785 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:47:26.440797 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:47:26.440809 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:47:26.440821 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:47:26.440834 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:47:26.440846 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:47:26.440858 | orchestrator | 2026-02-14 02:47:26.440871 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-02-14 02:47:26.440883 | orchestrator | Saturday 14 February 2026 02:47:06 +0000 (0:00:02.242) 0:06:58.992 ***** 2026-02-14 02:47:26.440937 | orchestrator | skipping: [testbed-node-3] 2026-02-14 02:47:26.440951 | orchestrator | 2026-02-14 02:47:26.440964 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-02-14 02:47:26.440977 | orchestrator | Saturday 14 February 2026 02:47:07 +0000 (0:00:00.126) 0:06:59.118 ***** 2026-02-14 02:47:26.440989 | orchestrator | ok: [testbed-manager] 2026-02-14 02:47:26.441002 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:47:26.441015 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:47:26.441027 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:47:26.441040 | 
orchestrator | changed: [testbed-node-0] 2026-02-14 02:47:26.441052 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:47:26.441064 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:47:26.441076 | orchestrator | 2026-02-14 02:47:26.441121 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-02-14 02:47:26.441140 | orchestrator | Saturday 14 February 2026 02:47:08 +0000 (0:00:01.130) 0:07:00.249 ***** 2026-02-14 02:47:26.441151 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:47:26.441162 | orchestrator | skipping: [testbed-node-3] 2026-02-14 02:47:26.441172 | orchestrator | skipping: [testbed-node-4] 2026-02-14 02:47:26.441183 | orchestrator | skipping: [testbed-node-5] 2026-02-14 02:47:26.441193 | orchestrator | skipping: [testbed-node-0] 2026-02-14 02:47:26.441204 | orchestrator | skipping: [testbed-node-1] 2026-02-14 02:47:26.441214 | orchestrator | skipping: [testbed-node-2] 2026-02-14 02:47:26.441225 | orchestrator | 2026-02-14 02:47:26.441235 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-02-14 02:47:26.441246 | orchestrator | Saturday 14 February 2026 02:47:08 +0000 (0:00:00.534) 0:07:00.784 ***** 2026-02-14 02:47:26.441258 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 02:47:26.441271 | orchestrator | 2026-02-14 02:47:26.441282 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-02-14 02:47:26.441293 | orchestrator | Saturday 14 February 2026 02:47:09 +0000 (0:00:01.150) 0:07:01.935 ***** 2026-02-14 02:47:26.441303 | orchestrator | ok: [testbed-manager] 2026-02-14 02:47:26.441314 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:47:26.441324 | orchestrator | ok: 
[testbed-node-4] 2026-02-14 02:47:26.441335 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:47:26.441345 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:47:26.441356 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:47:26.441367 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:47:26.441378 | orchestrator | 2026-02-14 02:47:26.441388 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-02-14 02:47:26.441399 | orchestrator | Saturday 14 February 2026 02:47:10 +0000 (0:00:00.873) 0:07:02.809 ***** 2026-02-14 02:47:26.441410 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-02-14 02:47:26.441439 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-02-14 02:47:26.441450 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-02-14 02:47:26.441461 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-02-14 02:47:26.441472 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-02-14 02:47:26.441482 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-02-14 02:47:26.441493 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-02-14 02:47:26.441504 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-02-14 02:47:26.441514 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-02-14 02:47:26.441525 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-02-14 02:47:26.441536 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-02-14 02:47:26.441546 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-02-14 02:47:26.441567 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-02-14 02:47:26.441578 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-02-14 02:47:26.441588 | orchestrator | 2026-02-14 02:47:26.441599 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-02-14 02:47:26.441610 | orchestrator | Saturday 14 February 2026 02:47:13 +0000 (0:00:02.343) 0:07:05.152 ***** 2026-02-14 02:47:26.441621 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:47:26.441631 | orchestrator | skipping: [testbed-node-3] 2026-02-14 02:47:26.441642 | orchestrator | skipping: [testbed-node-4] 2026-02-14 02:47:26.441652 | orchestrator | skipping: [testbed-node-5] 2026-02-14 02:47:26.441663 | orchestrator | skipping: [testbed-node-0] 2026-02-14 02:47:26.441674 | orchestrator | skipping: [testbed-node-1] 2026-02-14 02:47:26.441684 | orchestrator | skipping: [testbed-node-2] 2026-02-14 02:47:26.441695 | orchestrator | 2026-02-14 02:47:26.441706 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-02-14 02:47:26.441716 | orchestrator | Saturday 14 February 2026 02:47:13 +0000 (0:00:00.752) 0:07:05.905 ***** 2026-02-14 02:47:26.441729 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 02:47:26.441742 | orchestrator | 2026-02-14 02:47:26.441753 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-02-14 02:47:26.441763 | orchestrator | Saturday 14 February 2026 02:47:14 +0000 (0:00:00.849) 0:07:06.755 ***** 2026-02-14 02:47:26.441774 | orchestrator | ok: [testbed-manager] 2026-02-14 02:47:26.441785 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:47:26.441796 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:47:26.441806 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:47:26.441817 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:47:26.441828 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:47:26.441838 | orchestrator | ok: 
[testbed-node-2] 2026-02-14 02:47:26.441849 | orchestrator | 2026-02-14 02:47:26.441860 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-02-14 02:47:26.441870 | orchestrator | Saturday 14 February 2026 02:47:15 +0000 (0:00:00.879) 0:07:07.635 ***** 2026-02-14 02:47:26.441887 | orchestrator | ok: [testbed-manager] 2026-02-14 02:47:26.441898 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:47:26.441909 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:47:26.441919 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:47:26.441930 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:47:26.441940 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:47:26.441951 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:47:26.441961 | orchestrator | 2026-02-14 02:47:26.441972 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-02-14 02:47:26.441983 | orchestrator | Saturday 14 February 2026 02:47:16 +0000 (0:00:01.085) 0:07:08.720 ***** 2026-02-14 02:47:26.441994 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:47:26.442004 | orchestrator | skipping: [testbed-node-3] 2026-02-14 02:47:26.442073 | orchestrator | skipping: [testbed-node-4] 2026-02-14 02:47:26.442127 | orchestrator | skipping: [testbed-node-5] 2026-02-14 02:47:26.442140 | orchestrator | skipping: [testbed-node-0] 2026-02-14 02:47:26.442151 | orchestrator | skipping: [testbed-node-1] 2026-02-14 02:47:26.442161 | orchestrator | skipping: [testbed-node-2] 2026-02-14 02:47:26.442172 | orchestrator | 2026-02-14 02:47:26.442183 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-02-14 02:47:26.442193 | orchestrator | Saturday 14 February 2026 02:47:17 +0000 (0:00:00.517) 0:07:09.238 ***** 2026-02-14 02:47:26.442204 | orchestrator | ok: [testbed-manager] 2026-02-14 02:47:26.442215 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:47:26.442225 | 
orchestrator | ok: [testbed-node-4] 2026-02-14 02:47:26.442236 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:47:26.442247 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:47:26.442265 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:47:26.442276 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:47:26.442287 | orchestrator | 2026-02-14 02:47:26.442297 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-02-14 02:47:26.442308 | orchestrator | Saturday 14 February 2026 02:47:18 +0000 (0:00:01.485) 0:07:10.723 ***** 2026-02-14 02:47:26.442319 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:47:26.442330 | orchestrator | skipping: [testbed-node-3] 2026-02-14 02:47:26.442341 | orchestrator | skipping: [testbed-node-4] 2026-02-14 02:47:26.442352 | orchestrator | skipping: [testbed-node-5] 2026-02-14 02:47:26.442362 | orchestrator | skipping: [testbed-node-0] 2026-02-14 02:47:26.442373 | orchestrator | skipping: [testbed-node-1] 2026-02-14 02:47:26.442383 | orchestrator | skipping: [testbed-node-2] 2026-02-14 02:47:26.442394 | orchestrator | 2026-02-14 02:47:26.442404 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-02-14 02:47:26.442415 | orchestrator | Saturday 14 February 2026 02:47:19 +0000 (0:00:00.531) 0:07:11.255 ***** 2026-02-14 02:47:26.442426 | orchestrator | ok: [testbed-manager] 2026-02-14 02:47:26.442437 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:47:26.442447 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:47:26.442458 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:47:26.442468 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:47:26.442479 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:47:26.442498 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:47:59.512636 | orchestrator | 2026-02-14 02:47:59.512739 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target 
systemd file] *********** 2026-02-14 02:47:59.512750 | orchestrator | Saturday 14 February 2026 02:47:26 +0000 (0:00:07.212) 0:07:18.467 ***** 2026-02-14 02:47:59.512757 | orchestrator | ok: [testbed-manager] 2026-02-14 02:47:59.512765 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:47:59.512771 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:47:59.512777 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:47:59.512783 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:47:59.512789 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:47:59.512795 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:47:59.512801 | orchestrator | 2026-02-14 02:47:59.512808 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-02-14 02:47:59.512814 | orchestrator | Saturday 14 February 2026 02:47:28 +0000 (0:00:01.592) 0:07:20.059 ***** 2026-02-14 02:47:59.512820 | orchestrator | ok: [testbed-manager] 2026-02-14 02:47:59.512826 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:47:59.512832 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:47:59.512838 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:47:59.512844 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:47:59.512851 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:47:59.512856 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:47:59.512862 | orchestrator | 2026-02-14 02:47:59.512868 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-02-14 02:47:59.512874 | orchestrator | Saturday 14 February 2026 02:47:29 +0000 (0:00:01.694) 0:07:21.754 ***** 2026-02-14 02:47:59.512880 | orchestrator | ok: [testbed-manager] 2026-02-14 02:47:59.512886 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:47:59.512891 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:47:59.512898 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:47:59.512904 | 
orchestrator | changed: [testbed-node-0] 2026-02-14 02:47:59.512910 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:47:59.512916 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:47:59.512923 | orchestrator | 2026-02-14 02:47:59.512928 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-14 02:47:59.512935 | orchestrator | Saturday 14 February 2026 02:47:31 +0000 (0:00:01.719) 0:07:23.473 ***** 2026-02-14 02:47:59.512941 | orchestrator | ok: [testbed-manager] 2026-02-14 02:47:59.512947 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:47:59.512953 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:47:59.512980 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:47:59.512986 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:47:59.512992 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:47:59.512998 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:47:59.513003 | orchestrator | 2026-02-14 02:47:59.513009 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-14 02:47:59.513015 | orchestrator | Saturday 14 February 2026 02:47:32 +0000 (0:00:00.873) 0:07:24.347 ***** 2026-02-14 02:47:59.513022 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:47:59.513028 | orchestrator | skipping: [testbed-node-3] 2026-02-14 02:47:59.513034 | orchestrator | skipping: [testbed-node-4] 2026-02-14 02:47:59.513040 | orchestrator | skipping: [testbed-node-5] 2026-02-14 02:47:59.513046 | orchestrator | skipping: [testbed-node-0] 2026-02-14 02:47:59.513053 | orchestrator | skipping: [testbed-node-1] 2026-02-14 02:47:59.513059 | orchestrator | skipping: [testbed-node-2] 2026-02-14 02:47:59.513064 | orchestrator | 2026-02-14 02:47:59.513070 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-02-14 02:47:59.513077 | orchestrator | Saturday 14 February 2026 02:47:33 +0000 (0:00:01.096) 0:07:25.443 ***** 
2026-02-14 02:47:59.513083 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:47:59.513089 | orchestrator | skipping: [testbed-node-3] 2026-02-14 02:47:59.513095 | orchestrator | skipping: [testbed-node-4] 2026-02-14 02:47:59.513100 | orchestrator | skipping: [testbed-node-5] 2026-02-14 02:47:59.513106 | orchestrator | skipping: [testbed-node-0] 2026-02-14 02:47:59.513179 | orchestrator | skipping: [testbed-node-1] 2026-02-14 02:47:59.513186 | orchestrator | skipping: [testbed-node-2] 2026-02-14 02:47:59.513193 | orchestrator | 2026-02-14 02:47:59.513201 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-02-14 02:47:59.513208 | orchestrator | Saturday 14 February 2026 02:47:33 +0000 (0:00:00.548) 0:07:25.991 ***** 2026-02-14 02:47:59.513213 | orchestrator | ok: [testbed-manager] 2026-02-14 02:47:59.513236 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:47:59.513242 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:47:59.513248 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:47:59.513254 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:47:59.513260 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:47:59.513266 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:47:59.513272 | orchestrator | 2026-02-14 02:47:59.513278 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2026-02-14 02:47:59.513285 | orchestrator | Saturday 14 February 2026 02:47:34 +0000 (0:00:00.561) 0:07:26.553 ***** 2026-02-14 02:47:59.513291 | orchestrator | ok: [testbed-manager] 2026-02-14 02:47:59.513297 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:47:59.513303 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:47:59.513310 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:47:59.513316 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:47:59.513322 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:47:59.513328 | orchestrator | ok: [testbed-node-2] 2026-02-14 
02:47:59.513334 | orchestrator | 2026-02-14 02:47:59.513341 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2026-02-14 02:47:59.513347 | orchestrator | Saturday 14 February 2026 02:47:35 +0000 (0:00:00.586) 0:07:27.139 ***** 2026-02-14 02:47:59.513353 | orchestrator | ok: [testbed-manager] 2026-02-14 02:47:59.513359 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:47:59.513365 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:47:59.513371 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:47:59.513377 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:47:59.513384 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:47:59.513390 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:47:59.513396 | orchestrator | 2026-02-14 02:47:59.513402 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2026-02-14 02:47:59.513409 | orchestrator | Saturday 14 February 2026 02:47:35 +0000 (0:00:00.790) 0:07:27.929 ***** 2026-02-14 02:47:59.513415 | orchestrator | ok: [testbed-manager] 2026-02-14 02:47:59.513421 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:47:59.513435 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:47:59.513441 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:47:59.513447 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:47:59.513454 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:47:59.513460 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:47:59.513466 | orchestrator | 2026-02-14 02:47:59.513490 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2026-02-14 02:47:59.513497 | orchestrator | Saturday 14 February 2026 02:47:41 +0000 (0:00:05.521) 0:07:33.450 ***** 2026-02-14 02:47:59.513503 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:47:59.513510 | orchestrator | skipping: [testbed-node-3] 2026-02-14 02:47:59.513516 | orchestrator | skipping: [testbed-node-4] 2026-02-14 02:47:59.513522 
| orchestrator | skipping: [testbed-node-5] 2026-02-14 02:47:59.513528 | orchestrator | skipping: [testbed-node-0] 2026-02-14 02:47:59.513534 | orchestrator | skipping: [testbed-node-1] 2026-02-14 02:47:59.513540 | orchestrator | skipping: [testbed-node-2] 2026-02-14 02:47:59.513545 | orchestrator | 2026-02-14 02:47:59.513551 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2026-02-14 02:47:59.513557 | orchestrator | Saturday 14 February 2026 02:47:41 +0000 (0:00:00.582) 0:07:34.033 ***** 2026-02-14 02:47:59.513565 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 02:47:59.513572 | orchestrator | 2026-02-14 02:47:59.513578 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2026-02-14 02:47:59.513585 | orchestrator | Saturday 14 February 2026 02:47:43 +0000 (0:00:01.185) 0:07:35.218 ***** 2026-02-14 02:47:59.513591 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:47:59.513597 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:47:59.513602 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:47:59.513608 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:47:59.513614 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:47:59.513620 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:47:59.513626 | orchestrator | ok: [testbed-manager] 2026-02-14 02:47:59.513631 | orchestrator | 2026-02-14 02:47:59.513637 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2026-02-14 02:47:59.513643 | orchestrator | Saturday 14 February 2026 02:47:45 +0000 (0:00:02.438) 0:07:37.656 ***** 2026-02-14 02:47:59.513649 | orchestrator | ok: [testbed-manager] 2026-02-14 02:47:59.513655 | orchestrator | ok: [testbed-node-3] 2026-02-14 
02:47:59.513661 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:47:59.513667 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:47:59.513673 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:47:59.513678 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:47:59.513684 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:47:59.513690 | orchestrator | 2026-02-14 02:47:59.513696 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2026-02-14 02:47:59.513702 | orchestrator | Saturday 14 February 2026 02:47:46 +0000 (0:00:01.148) 0:07:38.805 ***** 2026-02-14 02:47:59.513708 | orchestrator | ok: [testbed-manager] 2026-02-14 02:47:59.513714 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:47:59.513721 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:47:59.513726 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:47:59.513732 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:47:59.513738 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:47:59.513744 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:47:59.513749 | orchestrator | 2026-02-14 02:47:59.513755 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2026-02-14 02:47:59.513761 | orchestrator | Saturday 14 February 2026 02:47:47 +0000 (0:00:00.862) 0:07:39.668 ***** 2026-02-14 02:47:59.513772 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-14 02:47:59.513780 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-14 02:47:59.513793 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-14 02:47:59.513799 | orchestrator | changed: [testbed-node-5] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-14 02:47:59.513805 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-14 02:47:59.513811 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-14 02:47:59.513817 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-14 02:47:59.513823 | orchestrator | 2026-02-14 02:47:59.513829 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-02-14 02:47:59.513835 | orchestrator | Saturday 14 February 2026 02:47:49 +0000 (0:00:02.009) 0:07:41.678 ***** 2026-02-14 02:47:59.513841 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 02:47:59.513848 | orchestrator | 2026-02-14 02:47:59.513853 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2026-02-14 02:47:59.513859 | orchestrator | Saturday 14 February 2026 02:47:50 +0000 (0:00:00.846) 0:07:42.524 ***** 2026-02-14 02:47:59.513865 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:47:59.513870 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:47:59.513876 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:47:59.513883 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:47:59.513889 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:47:59.513895 | orchestrator | changed: [testbed-manager] 2026-02-14 02:47:59.513901 | orchestrator | changed: 
[testbed-node-1] 2026-02-14 02:47:59.513907 | orchestrator | 2026-02-14 02:47:59.513919 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-02-14 02:48:32.326623 | orchestrator | Saturday 14 February 2026 02:47:59 +0000 (0:00:09.022) 0:07:51.547 ***** 2026-02-14 02:48:32.326758 | orchestrator | ok: [testbed-manager] 2026-02-14 02:48:32.326776 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:48:32.326788 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:48:32.326799 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:48:32.326810 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:48:32.326820 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:48:32.326832 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:48:32.326843 | orchestrator | 2026-02-14 02:48:32.326856 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-02-14 02:48:32.326930 | orchestrator | Saturday 14 February 2026 02:48:01 +0000 (0:00:02.044) 0:07:53.591 ***** 2026-02-14 02:48:32.326944 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:48:32.326955 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:48:32.326966 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:48:32.326977 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:48:32.326988 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:48:32.326998 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:48:32.327009 | orchestrator | 2026-02-14 02:48:32.327020 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-02-14 02:48:32.327031 | orchestrator | Saturday 14 February 2026 02:48:02 +0000 (0:00:01.326) 0:07:54.918 ***** 2026-02-14 02:48:32.327042 | orchestrator | changed: [testbed-manager] 2026-02-14 02:48:32.327054 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:48:32.327065 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:48:32.327076 | orchestrator | changed: 
[testbed-node-5] 2026-02-14 02:48:32.327087 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:48:32.327123 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:48:32.327135 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:48:32.327167 | orchestrator | 2026-02-14 02:48:32.327180 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-02-14 02:48:32.327193 | orchestrator | 2026-02-14 02:48:32.327205 | orchestrator | TASK [Include hardening role] ************************************************** 2026-02-14 02:48:32.327218 | orchestrator | Saturday 14 February 2026 02:48:04 +0000 (0:00:01.223) 0:07:56.142 ***** 2026-02-14 02:48:32.327230 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:48:32.327242 | orchestrator | skipping: [testbed-node-3] 2026-02-14 02:48:32.327254 | orchestrator | skipping: [testbed-node-4] 2026-02-14 02:48:32.327266 | orchestrator | skipping: [testbed-node-5] 2026-02-14 02:48:32.327279 | orchestrator | skipping: [testbed-node-0] 2026-02-14 02:48:32.327291 | orchestrator | skipping: [testbed-node-1] 2026-02-14 02:48:32.327303 | orchestrator | skipping: [testbed-node-2] 2026-02-14 02:48:32.327315 | orchestrator | 2026-02-14 02:48:32.327328 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-02-14 02:48:32.327340 | orchestrator | 2026-02-14 02:48:32.327353 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-02-14 02:48:32.327365 | orchestrator | Saturday 14 February 2026 02:48:04 +0000 (0:00:00.839) 0:07:56.981 ***** 2026-02-14 02:48:32.327378 | orchestrator | changed: [testbed-manager] 2026-02-14 02:48:32.327390 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:48:32.327403 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:48:32.327415 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:48:32.327427 | orchestrator | changed: [testbed-node-0] 2026-02-14 
02:48:32.327439 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:48:32.327451 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:48:32.327464 | orchestrator | 2026-02-14 02:48:32.327476 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-02-14 02:48:32.327507 | orchestrator | Saturday 14 February 2026 02:48:06 +0000 (0:00:01.323) 0:07:58.304 ***** 2026-02-14 02:48:32.327527 | orchestrator | ok: [testbed-manager] 2026-02-14 02:48:32.327547 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:48:32.327565 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:48:32.327584 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:48:32.327605 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:48:32.327617 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:48:32.327628 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:48:32.327639 | orchestrator | 2026-02-14 02:48:32.327650 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-02-14 02:48:32.327661 | orchestrator | Saturday 14 February 2026 02:48:07 +0000 (0:00:01.607) 0:07:59.912 ***** 2026-02-14 02:48:32.327672 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:48:32.327682 | orchestrator | skipping: [testbed-node-3] 2026-02-14 02:48:32.327693 | orchestrator | skipping: [testbed-node-4] 2026-02-14 02:48:32.327704 | orchestrator | skipping: [testbed-node-5] 2026-02-14 02:48:32.327714 | orchestrator | skipping: [testbed-node-0] 2026-02-14 02:48:32.327725 | orchestrator | skipping: [testbed-node-1] 2026-02-14 02:48:32.327735 | orchestrator | skipping: [testbed-node-2] 2026-02-14 02:48:32.327746 | orchestrator | 2026-02-14 02:48:32.327757 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-02-14 02:48:32.327767 | orchestrator | Saturday 14 February 2026 02:48:08 +0000 (0:00:00.500) 0:08:00.412 ***** 2026-02-14 02:48:32.327779 | orchestrator | included: 
osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 02:48:32.327792 | orchestrator | 2026-02-14 02:48:32.327803 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-02-14 02:48:32.327814 | orchestrator | Saturday 14 February 2026 02:48:09 +0000 (0:00:01.069) 0:08:01.482 ***** 2026-02-14 02:48:32.327827 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 02:48:32.327851 | orchestrator | 2026-02-14 02:48:32.327862 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-02-14 02:48:32.327873 | orchestrator | Saturday 14 February 2026 02:48:10 +0000 (0:00:00.809) 0:08:02.291 ***** 2026-02-14 02:48:32.327884 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:48:32.327894 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:48:32.327905 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:48:32.327915 | orchestrator | changed: [testbed-manager] 2026-02-14 02:48:32.327926 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:48:32.327937 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:48:32.327948 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:48:32.327958 | orchestrator | 2026-02-14 02:48:32.327990 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-02-14 02:48:32.328002 | orchestrator | Saturday 14 February 2026 02:48:18 +0000 (0:00:08.593) 0:08:10.884 ***** 2026-02-14 02:48:32.328012 | orchestrator | changed: [testbed-manager] 2026-02-14 02:48:32.328023 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:48:32.328034 | orchestrator | changed: [testbed-node-4] 2026-02-14 
02:48:32.328044 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:48:32.328055 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:48:32.328066 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:48:32.328076 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:48:32.328087 | orchestrator | 2026-02-14 02:48:32.328098 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-02-14 02:48:32.328109 | orchestrator | Saturday 14 February 2026 02:48:20 +0000 (0:00:01.176) 0:08:12.061 ***** 2026-02-14 02:48:32.328119 | orchestrator | changed: [testbed-manager] 2026-02-14 02:48:32.328130 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:48:32.328205 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:48:32.328226 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:48:32.328244 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:48:32.328255 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:48:32.328266 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:48:32.328276 | orchestrator | 2026-02-14 02:48:32.328287 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-02-14 02:48:32.328298 | orchestrator | Saturday 14 February 2026 02:48:21 +0000 (0:00:01.380) 0:08:13.442 ***** 2026-02-14 02:48:32.328309 | orchestrator | changed: [testbed-manager] 2026-02-14 02:48:32.328319 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:48:32.328330 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:48:32.328341 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:48:32.328351 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:48:32.328362 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:48:32.328373 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:48:32.328383 | orchestrator | 2026-02-14 02:48:32.328394 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 
2026-02-14 02:48:32.328405 | orchestrator | Saturday 14 February 2026 02:48:23 +0000 (0:00:02.089) 0:08:15.531 ***** 2026-02-14 02:48:32.328415 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:48:32.328426 | orchestrator | changed: [testbed-manager] 2026-02-14 02:48:32.328436 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:48:32.328447 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:48:32.328458 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:48:32.328469 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:48:32.328479 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:48:32.328490 | orchestrator | 2026-02-14 02:48:32.328501 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-02-14 02:48:32.328512 | orchestrator | Saturday 14 February 2026 02:48:24 +0000 (0:00:01.311) 0:08:16.843 ***** 2026-02-14 02:48:32.328522 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:48:32.328533 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:48:32.328553 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:48:32.328564 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:48:32.328574 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:48:32.328585 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:48:32.328595 | orchestrator | changed: [testbed-manager] 2026-02-14 02:48:32.328606 | orchestrator | 2026-02-14 02:48:32.328617 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-02-14 02:48:32.328627 | orchestrator | 2026-02-14 02:48:32.328646 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-02-14 02:48:32.328657 | orchestrator | Saturday 14 February 2026 02:48:26 +0000 (0:00:01.853) 0:08:18.697 ***** 2026-02-14 02:48:32.328668 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-14 02:48:32.328678 | orchestrator | 2026-02-14 02:48:32.328689 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-02-14 02:48:32.328699 | orchestrator | Saturday 14 February 2026 02:48:27 +0000 (0:00:00.971) 0:08:19.669 ***** 2026-02-14 02:48:32.328710 | orchestrator | ok: [testbed-manager] 2026-02-14 02:48:32.328721 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:48:32.328731 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:48:32.328742 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:48:32.328753 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:48:32.328763 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:48:32.328774 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:48:32.328784 | orchestrator | 2026-02-14 02:48:32.328795 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-02-14 02:48:32.328806 | orchestrator | Saturday 14 February 2026 02:48:28 +0000 (0:00:01.250) 0:08:20.920 ***** 2026-02-14 02:48:32.328817 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:48:32.328827 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:48:32.328838 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:48:32.328849 | orchestrator | changed: [testbed-manager] 2026-02-14 02:48:32.328859 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:48:32.328870 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:48:32.328880 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:48:32.328891 | orchestrator | 2026-02-14 02:48:32.328902 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-02-14 02:48:32.328912 | orchestrator | Saturday 14 February 2026 02:48:30 +0000 (0:00:01.353) 0:08:22.273 ***** 2026-02-14 02:48:32.328923 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-14 02:48:32.328934 | orchestrator | 2026-02-14 02:48:32.328945 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-02-14 02:48:32.328956 | orchestrator | Saturday 14 February 2026 02:48:31 +0000 (0:00:01.142) 0:08:23.416 ***** 2026-02-14 02:48:32.328966 | orchestrator | ok: [testbed-manager] 2026-02-14 02:48:32.328977 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:48:32.328988 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:48:32.328998 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:48:32.329009 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:48:32.329020 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:48:32.329030 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:48:32.329041 | orchestrator | 2026-02-14 02:48:32.329060 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-02-14 02:48:34.168490 | orchestrator | Saturday 14 February 2026 02:48:32 +0000 (0:00:00.929) 0:08:24.345 ***** 2026-02-14 02:48:34.168618 | orchestrator | changed: [testbed-manager] 2026-02-14 02:48:34.168633 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:48:34.168644 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:48:34.168654 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:48:34.168664 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:48:34.168674 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:48:34.168683 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:48:34.168721 | orchestrator | 2026-02-14 02:48:34.168732 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 02:48:34.168744 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-02-14 02:48:34.168755 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 
2026-02-14 02:48:34.168765 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-02-14 02:48:34.168774 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-02-14 02:48:34.168784 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0 2026-02-14 02:48:34.168794 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-02-14 02:48:34.168803 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-02-14 02:48:34.168812 | orchestrator | 2026-02-14 02:48:34.168822 | orchestrator | 2026-02-14 02:48:34.168832 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 02:48:34.168842 | orchestrator | Saturday 14 February 2026 02:48:33 +0000 (0:00:01.222) 0:08:25.567 ***** 2026-02-14 02:48:34.168852 | orchestrator | =============================================================================== 2026-02-14 02:48:34.168861 | orchestrator | osism.commons.packages : Install required packages --------------------- 83.60s 2026-02-14 02:48:34.168871 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 36.62s 2026-02-14 02:48:34.168880 | orchestrator | osism.commons.packages : Download required packages -------------------- 34.13s 2026-02-14 02:48:34.168890 | orchestrator | osism.commons.repository : Update package cache ------------------------ 14.97s 2026-02-14 02:48:34.168899 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 14.76s 2026-02-14 02:48:34.168925 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.97s 2026-02-14 02:48:34.168935 | orchestrator | osism.services.docker : Install docker package ------------------------- 
10.72s 2026-02-14 02:48:34.168945 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.77s 2026-02-14 02:48:34.168955 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.02s 2026-02-14 02:48:34.168965 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.70s 2026-02-14 02:48:34.168974 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.65s 2026-02-14 02:48:34.168983 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.59s 2026-02-14 02:48:34.168993 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.20s 2026-02-14 02:48:34.169002 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.73s 2026-02-14 02:48:34.169012 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.47s 2026-02-14 02:48:34.169021 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.21s 2026-02-14 02:48:34.169031 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.89s 2026-02-14 02:48:34.169043 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.86s 2026-02-14 02:48:34.169053 | orchestrator | osism.commons.services : Populate service facts ------------------------- 6.08s 2026-02-14 02:48:34.169064 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.52s 2026-02-14 02:48:34.596842 | orchestrator | + osism apply fail2ban 2026-02-14 02:48:47.684100 | orchestrator | 2026-02-14 02:48:47 | INFO  | Task 6fc676b5-e071-4ebc-a633-ed234ccad735 (fail2ban) was prepared for execution. 
2026-02-14 02:48:47.684291 | orchestrator | 2026-02-14 02:48:47 | INFO  | It takes a moment until task 6fc676b5-e071-4ebc-a633-ed234ccad735 (fail2ban) has been started and output is visible here. 2026-02-14 02:49:08.858105 | orchestrator | 2026-02-14 02:49:08.858254 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-02-14 02:49:08.858274 | orchestrator | 2026-02-14 02:49:08.858288 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-02-14 02:49:08.858299 | orchestrator | Saturday 14 February 2026 02:48:52 +0000 (0:00:00.281) 0:00:00.281 ***** 2026-02-14 02:49:08.858313 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-14 02:49:08.858326 | orchestrator | 2026-02-14 02:49:08.858338 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-02-14 02:49:08.858349 | orchestrator | Saturday 14 February 2026 02:48:53 +0000 (0:00:01.164) 0:00:01.446 ***** 2026-02-14 02:49:08.858360 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:49:08.858372 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:49:08.858383 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:49:08.858394 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:49:08.858405 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:49:08.858416 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:49:08.858427 | orchestrator | changed: [testbed-manager] 2026-02-14 02:49:08.858438 | orchestrator | 2026-02-14 02:49:08.858466 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-02-14 02:49:08.858489 | orchestrator | Saturday 14 February 2026 02:49:03 +0000 (0:00:10.298) 0:00:11.744 ***** 
2026-02-14 02:49:08.858500 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:49:08.858511 | orchestrator | changed: [testbed-manager]
2026-02-14 02:49:08.858522 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:49:08.858533 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:49:08.858544 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:49:08.858555 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:49:08.858566 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:49:08.858577 | orchestrator |
2026-02-14 02:49:08.858590 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-02-14 02:49:08.858602 | orchestrator | Saturday 14 February 2026 02:49:05 +0000 (0:00:01.435) 0:00:13.180 *****
2026-02-14 02:49:08.858616 | orchestrator | ok: [testbed-manager]
2026-02-14 02:49:08.858630 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:49:08.858642 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:49:08.858668 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:49:08.858691 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:49:08.858704 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:49:08.858716 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:49:08.858729 | orchestrator |
2026-02-14 02:49:08.858741 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-02-14 02:49:08.858754 | orchestrator | Saturday 14 February 2026 02:49:06 +0000 (0:00:01.439) 0:00:14.619 *****
2026-02-14 02:49:08.858767 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:49:08.858780 | orchestrator | changed: [testbed-manager]
2026-02-14 02:49:08.858792 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:49:08.858805 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:49:08.858818 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:49:08.858830 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:49:08.858843 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:49:08.858856 | orchestrator |
2026-02-14 02:49:08.858869 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 02:49:08.858882 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-14 02:49:08.858928 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-14 02:49:08.858942 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-14 02:49:08.858956 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-14 02:49:08.858969 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-14 02:49:08.858982 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-14 02:49:08.858995 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-14 02:49:08.859008 | orchestrator |
2026-02-14 02:49:08.859019 | orchestrator |
2026-02-14 02:49:08.859030 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 02:49:08.859041 | orchestrator | Saturday 14 February 2026 02:49:08 +0000 (0:00:01.549) 0:00:16.169 *****
2026-02-14 02:49:08.859052 | orchestrator | ===============================================================================
2026-02-14 02:49:08.859063 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 10.30s
2026-02-14 02:49:08.859074 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.55s
2026-02-14 02:49:08.859084 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.44s
2026-02-14 02:49:08.859095 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.44s
2026-02-14 02:49:08.859106 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.16s
2026-02-14 02:49:09.182990 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-02-14 02:49:09.183067 | orchestrator | + osism apply network
2026-02-14 02:49:21.177681 | orchestrator | 2026-02-14 02:49:21 | INFO  | Task 78fd2a16-7501-499a-ab71-afef4a46e0e8 (network) was prepared for execution.
2026-02-14 02:49:21.177816 | orchestrator | 2026-02-14 02:49:21 | INFO  | It takes a moment until task 78fd2a16-7501-499a-ab71-afef4a46e0e8 (network) has been started and output is visible here.
2026-02-14 02:49:49.377639 | orchestrator |
2026-02-14 02:49:49.377742 | orchestrator | PLAY [Apply role network] ******************************************************
2026-02-14 02:49:49.377755 | orchestrator |
2026-02-14 02:49:49.377765 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-02-14 02:49:49.377773 | orchestrator | Saturday 14 February 2026 02:49:25 +0000 (0:00:00.239) 0:00:00.239 *****
2026-02-14 02:49:49.377782 | orchestrator | ok: [testbed-manager]
2026-02-14 02:49:49.377791 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:49:49.377799 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:49:49.377807 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:49:49.377815 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:49:49.377823 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:49:49.377831 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:49:49.377838 | orchestrator |
2026-02-14 02:49:49.377846 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-02-14 02:49:49.377854 | orchestrator | Saturday 14 February 2026 02:49:25 +0000 (0:00:00.527) 0:00:00.767 *****
2026-02-14 02:49:49.377866 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-14 02:49:49.377882 | orchestrator |
2026-02-14 02:49:49.377895 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-02-14 02:49:49.377940 | orchestrator | Saturday 14 February 2026 02:49:26 +0000 (0:00:00.936) 0:00:01.703 *****
2026-02-14 02:49:49.377956 | orchestrator | ok: [testbed-manager]
2026-02-14 02:49:49.377970 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:49:49.377982 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:49:49.377996 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:49:49.378005 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:49:49.378065 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:49:49.378075 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:49:49.378083 | orchestrator |
2026-02-14 02:49:49.378091 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-02-14 02:49:49.378098 | orchestrator | Saturday 14 February 2026 02:49:28 +0000 (0:00:01.943) 0:00:03.646 *****
2026-02-14 02:49:49.378106 | orchestrator | ok: [testbed-manager]
2026-02-14 02:49:49.378114 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:49:49.378122 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:49:49.378130 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:49:49.378138 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:49:49.378145 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:49:49.378154 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:49:49.378163 | orchestrator |
2026-02-14 02:49:49.378172 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-02-14 02:49:49.378181 | orchestrator | Saturday 14 February 2026 02:49:30 +0000 (0:00:01.610) 0:00:05.256 *****
2026-02-14 02:49:49.378196 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-02-14 02:49:49.378231 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-02-14 02:49:49.378246 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-02-14 02:49:49.378260 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-02-14 02:49:49.378272 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-02-14 02:49:49.378286 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-02-14 02:49:49.378298 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-02-14 02:49:49.378311 | orchestrator |
2026-02-14 02:49:49.378343 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-02-14 02:49:49.378361 | orchestrator | Saturday 14 February 2026 02:49:31 +0000 (0:00:00.981) 0:00:06.238 *****
2026-02-14 02:49:49.378374 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-14 02:49:49.378387 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-14 02:49:49.378399 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-14 02:49:49.378411 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-14 02:49:49.378422 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-14 02:49:49.378436 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-14 02:49:49.378448 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-14 02:49:49.378460 | orchestrator |
2026-02-14 02:49:49.378474 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-02-14 02:49:49.378487 | orchestrator | Saturday 14 February 2026 02:49:34 +0000 (0:00:03.284) 0:00:09.522 *****
2026-02-14 02:49:49.378499 | orchestrator | changed: [testbed-manager]
2026-02-14 02:49:49.378513 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:49:49.378526 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:49:49.378539 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:49:49.378551 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:49:49.378564 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:49:49.378576 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:49:49.378589 | orchestrator |
2026-02-14 02:49:49.378601 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-02-14 02:49:49.378614 | orchestrator | Saturday 14 February 2026 02:49:36 +0000 (0:00:01.633) 0:00:11.155 *****
2026-02-14 02:49:49.378626 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-14 02:49:49.378639 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-14 02:49:49.378651 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-14 02:49:49.378663 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-14 02:49:49.378691 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-14 02:49:49.378705 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-14 02:49:49.378718 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-14 02:49:49.378730 | orchestrator |
2026-02-14 02:49:49.378743 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-02-14 02:49:49.378756 | orchestrator | Saturday 14 February 2026 02:49:38 +0000 (0:00:02.083) 0:00:13.238 *****
2026-02-14 02:49:49.378768 | orchestrator | ok: [testbed-manager]
2026-02-14 02:49:49.378781 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:49:49.378793 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:49:49.378806 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:49:49.378820 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:49:49.378832 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:49:49.378845 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:49:49.378857 | orchestrator |
2026-02-14 02:49:49.378870 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-02-14 02:49:49.378909 | orchestrator | Saturday 14 February 2026 02:49:39 +0000 (0:00:01.143) 0:00:14.382 *****
2026-02-14 02:49:49.378922 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:49:49.378934 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:49:49.378946 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:49:49.378958 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:49:49.378971 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:49:49.378984 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:49:49.378996 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:49:49.379009 | orchestrator |
2026-02-14 02:49:49.379023 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-02-14 02:49:49.379036 | orchestrator | Saturday 14 February 2026 02:49:40 +0000 (0:00:00.645) 0:00:15.027 *****
2026-02-14 02:49:49.379049 | orchestrator | ok: [testbed-manager]
2026-02-14 02:49:49.379062 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:49:49.379075 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:49:49.379087 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:49:49.379099 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:49:49.379110 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:49:49.379123 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:49:49.379137 | orchestrator |
2026-02-14 02:49:49.379150 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-02-14 02:49:49.379163 | orchestrator | Saturday 14 February 2026 02:49:42 +0000 (0:00:02.123) 0:00:17.151 *****
2026-02-14 02:49:49.379176 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:49:49.379189 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:49:49.379202 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:49:49.379247 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:49:49.379262 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:49:49.379275 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:49:49.379291 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2026-02-14 02:49:49.379306 | orchestrator |
2026-02-14 02:49:49.379320 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-02-14 02:49:49.379333 | orchestrator | Saturday 14 February 2026 02:49:43 +0000 (0:00:00.960) 0:00:18.112 *****
2026-02-14 02:49:49.379346 | orchestrator | ok: [testbed-manager]
2026-02-14 02:49:49.379357 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:49:49.379365 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:49:49.379373 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:49:49.379380 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:49:49.379388 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:49:49.379396 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:49:49.379403 | orchestrator |
2026-02-14 02:49:49.379411 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-02-14 02:49:49.379419 | orchestrator | Saturday 14 February 2026 02:49:44 +0000 (0:00:01.648) 0:00:19.760 *****
2026-02-14 02:49:49.379428 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-14 02:49:49.379449 | orchestrator |
2026-02-14 02:49:49.379457 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-02-14 02:49:49.379465 | orchestrator | Saturday 14 February 2026 02:49:46 +0000 (0:00:01.292) 0:00:21.053 *****
2026-02-14 02:49:49.379473 | orchestrator | ok: [testbed-manager]
2026-02-14 02:49:49.379480 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:49:49.379488 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:49:49.379496 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:49:49.379512 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:49:49.379520 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:49:49.379528 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:49:49.379535 | orchestrator |
2026-02-14 02:49:49.379543 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-02-14 02:49:49.379551 | orchestrator | Saturday 14 February 2026 02:49:47 +0000 (0:00:01.151) 0:00:22.204 *****
2026-02-14 02:49:49.379559 | orchestrator | ok: [testbed-manager]
2026-02-14 02:49:49.379566 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:49:49.379574 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:49:49.379581 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:49:49.379589 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:49:49.379597 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:49:49.379604 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:49:49.379612 | orchestrator |
2026-02-14 02:49:49.379620 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-02-14 02:49:49.379628 | orchestrator | Saturday 14 February 2026 02:49:48 +0000 (0:00:00.673) 0:00:22.878 *****
2026-02-14 02:49:49.379636 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-02-14 02:49:49.379644 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-02-14 02:49:49.379652 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-02-14 02:49:49.379660 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-02-14 02:49:49.379667 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-14 02:49:49.379675 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-14 02:49:49.379683 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-02-14 02:49:49.379690 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-02-14 02:49:49.379698 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-14 02:49:49.379706 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-14 02:49:49.379713 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-14 02:49:49.379721 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-14 02:49:49.379729 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-02-14 02:49:49.379737 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-14 02:49:49.379744 | orchestrator |
2026-02-14 02:49:49.379761 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-02-14 02:50:05.852754 | orchestrator | Saturday 14 February 2026 02:49:49 +0000 (0:00:01.275) 0:00:24.154 *****
2026-02-14 02:50:05.852850 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:50:05.852866 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:50:05.852877 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:50:05.852888 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:50:05.852899 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:50:05.852911 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:50:05.852931 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:50:05.852942 | orchestrator |
2026-02-14 02:50:05.852973 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-02-14 02:50:05.852993 | orchestrator | Saturday 14 February 2026 02:49:50 +0000 (0:00:00.648) 0:00:24.802 *****
2026-02-14 02:50:05.853007 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-manager, testbed-node-1, testbed-node-3, testbed-node-2, testbed-node-4, testbed-node-5
2026-02-14 02:50:05.853020 | orchestrator |
2026-02-14 02:50:05.853031 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-02-14 02:50:05.853042 | orchestrator | Saturday 14 February 2026 02:49:54 +0000 (0:00:04.616) 0:00:29.419 *****
2026-02-14 02:50:05.853055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-02-14 02:50:05.853067 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-02-14 02:50:05.853079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-02-14 02:50:05.853090 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-02-14 02:50:05.853102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-02-14 02:50:05.853127 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-02-14 02:50:05.853139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-02-14 02:50:05.853150 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-02-14 02:50:05.853161 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-02-14 02:50:05.853172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-02-14 02:50:05.853189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-02-14 02:50:05.853216 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-02-14 02:50:05.853266 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-02-14 02:50:05.853290 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-02-14 02:50:05.853312 | orchestrator |
2026-02-14 02:50:05.853332 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-02-14 02:50:05.853346 | orchestrator | Saturday 14 February 2026 02:50:00 +0000 (0:00:05.579) 0:00:34.998 *****
2026-02-14 02:50:05.853361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-02-14 02:50:05.853380 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-02-14 02:50:05.853393 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-02-14 02:50:05.853407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-02-14 02:50:05.853425 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-02-14 02:50:05.853448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-02-14 02:50:05.853462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-02-14 02:50:05.853475 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-02-14 02:50:05.853488 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-02-14 02:50:05.853501 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-02-14 02:50:05.853514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-02-14 02:50:05.853532 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-02-14 02:50:05.853552 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-02-14 02:50:11.376377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-02-14 02:50:11.376461 | orchestrator |
2026-02-14 02:50:11.376470 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-02-14 02:50:11.376477 | orchestrator | Saturday 14 February 2026 02:50:05 +0000 (0:00:05.632) 0:00:40.630 *****
2026-02-14 02:50:11.376485 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-14 02:50:11.376491 | orchestrator |
2026-02-14 02:50:11.376497 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-02-14 02:50:11.376503 | orchestrator | Saturday 14 February 2026 02:50:07 +0000 (0:00:01.182) 0:00:41.813 *****
2026-02-14 02:50:11.376509 | orchestrator | ok: [testbed-manager]
2026-02-14 02:50:11.376515 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:50:11.376520 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:50:11.376526 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:50:11.376531 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:50:11.376536 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:50:11.376542 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:50:11.376547 | orchestrator |
2026-02-14 02:50:11.376553 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-02-14 02:50:11.376558 | orchestrator | Saturday 14 February 2026 02:50:08 +0000 (0:00:01.188) 0:00:43.002 *****
2026-02-14 02:50:11.376564 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-14 02:50:11.376570 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-14 02:50:11.376575 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-14 02:50:11.376581 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-14 02:50:11.376586 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-14 02:50:11.376592 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-14 02:50:11.376597 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-14 02:50:11.376603 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-14 02:50:11.376608 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:50:11.376614 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-14 02:50:11.376631 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-14 02:50:11.376637 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:50:11.376642 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-14 02:50:11.376648 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-14 02:50:11.376668 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-14 02:50:11.376673 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-14 02:50:11.376679 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-14 02:50:11.376684 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:50:11.376690 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-14 02:50:11.376695 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-14 02:50:11.376701 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-14 02:50:11.376706 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-14 02:50:11.376711 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-14 02:50:11.376717 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:50:11.376722 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-14 02:50:11.376728 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-14 02:50:11.376733 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-14 02:50:11.376738 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-14 02:50:11.376744 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:50:11.376749 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:50:11.376754 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-14 02:50:11.376760 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-14 02:50:11.376765 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-14 02:50:11.376770 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-14 02:50:11.376776 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:50:11.376781 | orchestrator |
2026-02-14 02:50:11.376787 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-02-14 02:50:11.376803 | orchestrator | Saturday 14 February 2026 02:50:09 +0000 (0:00:01.727) 0:00:44.729 *****
2026-02-14 02:50:11.376808 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:50:11.376814 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:50:11.376819 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:50:11.376825 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:50:11.376830 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:50:11.376835 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:50:11.376841 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:50:11.376846 | orchestrator |
2026-02-14 02:50:11.376852 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-02-14 02:50:11.376857 | orchestrator | Saturday 14 February 2026 02:50:10 +0000 (0:00:00.560) 0:00:45.290 *****
2026-02-14 02:50:11.376862 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:50:11.376868 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:50:11.376873 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:50:11.376879 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:50:11.376884 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:50:11.376890 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:50:11.376895 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:50:11.376900 | orchestrator |
2026-02-14 02:50:11.376906 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 02:50:11.376912 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-14 02:50:11.376919 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-14 02:50:11.376929 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-14 02:50:11.376934 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-14 02:50:11.376940 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-14 02:50:11.376945 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-14 02:50:11.376952 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-14 02:50:11.376958 | orchestrator |
2026-02-14 02:50:11.376964 | orchestrator |
2026-02-14 02:50:11.376970 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 02:50:11.376977 | orchestrator | Saturday 14 February 2026 02:50:11 +0000 (0:00:00.599) 0:00:45.889 *****
2026-02-14 02:50:11.376985 | orchestrator | ===============================================================================
2026-02-14 02:50:11.376992 | orchestrator | osism.commons.network : Create systemd networkd network
files ----------- 5.63s 2026-02-14 02:50:11.376998 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.58s 2026-02-14 02:50:11.377004 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.62s 2026-02-14 02:50:11.377010 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.28s 2026-02-14 02:50:11.377016 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.12s 2026-02-14 02:50:11.377022 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 2.08s 2026-02-14 02:50:11.377028 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.94s 2026-02-14 02:50:11.377034 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.73s 2026-02-14 02:50:11.377040 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.65s 2026-02-14 02:50:11.377046 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.63s 2026-02-14 02:50:11.377052 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.61s 2026-02-14 02:50:11.377059 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.29s 2026-02-14 02:50:11.377065 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.28s 2026-02-14 02:50:11.377071 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.19s 2026-02-14 02:50:11.377077 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.18s 2026-02-14 02:50:11.377084 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.15s 2026-02-14 02:50:11.377090 | orchestrator | osism.commons.network : Check if path for interface file exists 
--------- 1.14s 2026-02-14 02:50:11.377096 | orchestrator | osism.commons.network : Create required directories --------------------- 0.98s 2026-02-14 02:50:11.377102 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.96s 2026-02-14 02:50:11.377109 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 0.94s 2026-02-14 02:50:11.591337 | orchestrator | + osism apply wireguard 2026-02-14 02:50:23.665134 | orchestrator | 2026-02-14 02:50:23 | INFO  | Task 5aab43b5-92cb-443e-8872-0f9304b3ba03 (wireguard) was prepared for execution. 2026-02-14 02:50:23.665274 | orchestrator | 2026-02-14 02:50:23 | INFO  | It takes a moment until task 5aab43b5-92cb-443e-8872-0f9304b3ba03 (wireguard) has been started and output is visible here. 2026-02-14 02:50:43.792939 | orchestrator | 2026-02-14 02:50:43.793082 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-02-14 02:50:43.793100 | orchestrator | 2026-02-14 02:50:43.793112 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-02-14 02:50:43.793124 | orchestrator | Saturday 14 February 2026 02:50:27 +0000 (0:00:00.219) 0:00:00.219 ***** 2026-02-14 02:50:43.793136 | orchestrator | ok: [testbed-manager] 2026-02-14 02:50:43.793147 | orchestrator | 2026-02-14 02:50:43.793158 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-02-14 02:50:43.793169 | orchestrator | Saturday 14 February 2026 02:50:29 +0000 (0:00:01.554) 0:00:01.774 ***** 2026-02-14 02:50:43.793179 | orchestrator | changed: [testbed-manager] 2026-02-14 02:50:43.793195 | orchestrator | 2026-02-14 02:50:43.793207 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-02-14 02:50:43.793218 | orchestrator | Saturday 14 February 2026 02:50:36 +0000 (0:00:06.553) 0:00:08.327 ***** 2026-02-14 02:50:43.793228 
| orchestrator | changed: [testbed-manager] 2026-02-14 02:50:43.793239 | orchestrator | 2026-02-14 02:50:43.793250 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-02-14 02:50:43.793260 | orchestrator | Saturday 14 February 2026 02:50:36 +0000 (0:00:00.575) 0:00:08.903 ***** 2026-02-14 02:50:43.793271 | orchestrator | changed: [testbed-manager] 2026-02-14 02:50:43.793281 | orchestrator | 2026-02-14 02:50:43.793292 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-02-14 02:50:43.793303 | orchestrator | Saturday 14 February 2026 02:50:37 +0000 (0:00:00.467) 0:00:09.371 ***** 2026-02-14 02:50:43.793313 | orchestrator | ok: [testbed-manager] 2026-02-14 02:50:43.793324 | orchestrator | 2026-02-14 02:50:43.793334 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-02-14 02:50:43.793410 | orchestrator | Saturday 14 February 2026 02:50:37 +0000 (0:00:00.714) 0:00:10.085 ***** 2026-02-14 02:50:43.793431 | orchestrator | ok: [testbed-manager] 2026-02-14 02:50:43.793448 | orchestrator | 2026-02-14 02:50:43.793467 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-02-14 02:50:43.793488 | orchestrator | Saturday 14 February 2026 02:50:38 +0000 (0:00:00.433) 0:00:10.519 ***** 2026-02-14 02:50:43.793507 | orchestrator | ok: [testbed-manager] 2026-02-14 02:50:43.793521 | orchestrator | 2026-02-14 02:50:43.793533 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-02-14 02:50:43.793545 | orchestrator | Saturday 14 February 2026 02:50:38 +0000 (0:00:00.424) 0:00:10.944 ***** 2026-02-14 02:50:43.793557 | orchestrator | changed: [testbed-manager] 2026-02-14 02:50:43.793570 | orchestrator | 2026-02-14 02:50:43.793582 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-02-14 
02:50:43.793593 | orchestrator | Saturday 14 February 2026 02:50:39 +0000 (0:00:01.168) 0:00:12.112 ***** 2026-02-14 02:50:43.793604 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-14 02:50:43.793614 | orchestrator | changed: [testbed-manager] 2026-02-14 02:50:43.793625 | orchestrator | 2026-02-14 02:50:43.793636 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-02-14 02:50:43.793646 | orchestrator | Saturday 14 February 2026 02:50:40 +0000 (0:00:00.933) 0:00:13.046 ***** 2026-02-14 02:50:43.793657 | orchestrator | changed: [testbed-manager] 2026-02-14 02:50:43.793669 | orchestrator | 2026-02-14 02:50:43.793715 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-02-14 02:50:43.793739 | orchestrator | Saturday 14 February 2026 02:50:42 +0000 (0:00:01.691) 0:00:14.737 ***** 2026-02-14 02:50:43.793750 | orchestrator | changed: [testbed-manager] 2026-02-14 02:50:43.793760 | orchestrator | 2026-02-14 02:50:43.793771 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 02:50:43.793783 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-14 02:50:43.793794 | orchestrator | 2026-02-14 02:50:43.793805 | orchestrator | 2026-02-14 02:50:43.793816 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 02:50:43.793837 | orchestrator | Saturday 14 February 2026 02:50:43 +0000 (0:00:00.990) 0:00:15.728 ***** 2026-02-14 02:50:43.793848 | orchestrator | =============================================================================== 2026-02-14 02:50:43.793859 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.55s 2026-02-14 02:50:43.793869 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.69s 2026-02-14 
02:50:43.793880 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.55s 2026-02-14 02:50:43.793891 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.17s 2026-02-14 02:50:43.793901 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.99s 2026-02-14 02:50:43.793912 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.93s 2026-02-14 02:50:43.793923 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.71s 2026-02-14 02:50:43.793933 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.58s 2026-02-14 02:50:43.793944 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.47s 2026-02-14 02:50:43.793954 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.43s 2026-02-14 02:50:43.793965 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s 2026-02-14 02:50:44.103765 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-02-14 02:50:44.136580 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-02-14 02:50:44.136672 | orchestrator | Dload Upload Total Spent Left Speed 2026-02-14 02:50:44.218634 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 182 0 --:--:-- --:--:-- --:--:-- 182 2026-02-14 02:50:44.231796 | orchestrator | + osism apply --environment custom workarounds 2026-02-14 02:50:46.174262 | orchestrator | 2026-02-14 02:50:46 | INFO  | Trying to run play workarounds in environment custom 2026-02-14 02:50:56.398121 | orchestrator | 2026-02-14 02:50:56 | INFO  | Task fbf84e54-10bd-4753-b1c5-b0f51bd363c9 (workarounds) was prepared for execution. 
2026-02-14 02:50:56.398248 | orchestrator | 2026-02-14 02:50:56 | INFO  | It takes a moment until task fbf84e54-10bd-4753-b1c5-b0f51bd363c9 (workarounds) has been started and output is visible here.
2026-02-14 02:51:21.628830 | orchestrator |
2026-02-14 02:51:21.628978 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-14 02:51:21.629004 | orchestrator |
2026-02-14 02:51:21.629027 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-02-14 02:51:21.629045 | orchestrator | Saturday 14 February 2026 02:51:00 +0000 (0:00:00.126) 0:00:00.126 *****
2026-02-14 02:51:21.629064 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-02-14 02:51:21.629084 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-02-14 02:51:21.629105 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-02-14 02:51:21.629124 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-02-14 02:51:21.629140 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-02-14 02:51:21.629151 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-02-14 02:51:21.629162 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-02-14 02:51:21.629173 | orchestrator |
2026-02-14 02:51:21.629184 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-02-14 02:51:21.629195 | orchestrator |
2026-02-14 02:51:21.629206 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-02-14 02:51:21.629217 | orchestrator | Saturday 14 February 2026 02:51:01 +0000 (0:00:00.811) 0:00:00.937 *****
2026-02-14 02:51:21.629228 | orchestrator | ok: [testbed-manager]
2026-02-14 02:51:21.629267 | orchestrator |
2026-02-14 02:51:21.629278 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-02-14 02:51:21.629289 | orchestrator |
2026-02-14 02:51:21.629300 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-02-14 02:51:21.629311 | orchestrator | Saturday 14 February 2026 02:51:03 +0000 (0:00:02.309) 0:00:03.247 *****
2026-02-14 02:51:21.629322 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:51:21.629333 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:51:21.629344 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:51:21.629354 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:51:21.629365 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:51:21.629375 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:51:21.629386 | orchestrator |
2026-02-14 02:51:21.629397 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-02-14 02:51:21.629407 | orchestrator |
2026-02-14 02:51:21.629418 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-02-14 02:51:21.629444 | orchestrator | Saturday 14 February 2026 02:51:05 +0000 (0:00:01.777) 0:00:05.025 *****
2026-02-14 02:51:21.629457 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-14 02:51:21.629523 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-14 02:51:21.629543 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-14 02:51:21.629559 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-14 02:51:21.629570 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-14 02:51:21.629581 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-14 02:51:21.629592 | orchestrator |
2026-02-14 02:51:21.629603 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-02-14 02:51:21.629614 | orchestrator | Saturday 14 February 2026 02:51:06 +0000 (0:00:01.452) 0:00:06.477 *****
2026-02-14 02:51:21.629625 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:51:21.629636 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:51:21.629647 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:51:21.629658 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:51:21.629669 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:51:21.629679 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:51:21.629690 | orchestrator |
2026-02-14 02:51:21.629701 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-02-14 02:51:21.629712 | orchestrator | Saturday 14 February 2026 02:51:10 +0000 (0:00:03.745) 0:00:10.222 *****
2026-02-14 02:51:21.629722 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:51:21.629740 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:51:21.629766 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:51:21.629788 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:51:21.629806 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:51:21.629824 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:51:21.629841 | orchestrator |
2026-02-14 02:51:21.629860 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-02-14 02:51:21.629878 | orchestrator |
2026-02-14 02:51:21.629897 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-02-14 02:51:21.629916 | orchestrator | Saturday 14 February 2026 02:51:11 +0000 (0:00:00.773) 0:00:10.995 *****
2026-02-14 02:51:21.629934 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:51:21.629951 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:51:21.629966 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:51:21.629977 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:51:21.629987 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:51:21.629998 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:51:21.630072 | orchestrator | changed: [testbed-manager]
2026-02-14 02:51:21.630086 | orchestrator |
2026-02-14 02:51:21.630098 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-02-14 02:51:21.630108 | orchestrator | Saturday 14 February 2026 02:51:13 +0000 (0:00:01.648) 0:00:12.644 *****
2026-02-14 02:51:21.630119 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:51:21.630130 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:51:21.630140 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:51:21.630151 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:51:21.630162 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:51:21.630172 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:51:21.630205 | orchestrator | changed: [testbed-manager]
2026-02-14 02:51:21.630216 | orchestrator |
2026-02-14 02:51:21.630227 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-02-14 02:51:21.630238 | orchestrator | Saturday 14 February 2026 02:51:14 +0000 (0:00:01.645) 0:00:14.289 *****
2026-02-14 02:51:21.630248 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:51:21.630259 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:51:21.630270 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:51:21.630280 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:51:21.630291 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:51:21.630301 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:51:21.630312 | orchestrator | ok: [testbed-manager]
2026-02-14 02:51:21.630323 | orchestrator |
2026-02-14 02:51:21.630333 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-02-14 02:51:21.630344 | orchestrator | Saturday 14 February 2026 02:51:16 +0000 (0:00:01.645) 0:00:15.935 *****
2026-02-14 02:51:21.630355 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:51:21.630366 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:51:21.630376 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:51:21.630387 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:51:21.630397 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:51:21.630408 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:51:21.630418 | orchestrator | changed: [testbed-manager]
2026-02-14 02:51:21.630429 | orchestrator |
2026-02-14 02:51:21.630439 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-02-14 02:51:21.630450 | orchestrator | Saturday 14 February 2026 02:51:18 +0000 (0:00:01.967) 0:00:17.904 *****
2026-02-14 02:51:21.630460 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:51:21.630501 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:51:21.630511 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:51:21.630522 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:51:21.630533 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:51:21.630543 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:51:21.630554 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:51:21.630564 | orchestrator |
2026-02-14 02:51:21.630575 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-02-14 02:51:21.630586 | orchestrator |
2026-02-14 02:51:21.630597 | orchestrator | TASK [Install python3-docker] **************************************************
2026-02-14 02:51:21.630608 | orchestrator | Saturday 14 February 2026 02:51:19 +0000 (0:00:00.704) 0:00:18.608 *****
2026-02-14 02:51:21.630618 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:51:21.630629 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:51:21.630639 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:51:21.630650 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:51:21.630660 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:51:21.630679 | orchestrator | ok: [testbed-manager]
2026-02-14 02:51:21.630690 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:51:21.630701 | orchestrator |
2026-02-14 02:51:21.630712 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 02:51:21.630724 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-14 02:51:21.630735 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-14 02:51:21.630753 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-14 02:51:21.630764 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-14 02:51:21.630775 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-14 02:51:21.630786 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-14 02:51:21.630797 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-14 02:51:21.630808 | orchestrator |
2026-02-14 02:51:21.630819 | orchestrator |
2026-02-14 02:51:21.630830 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 02:51:21.630841 | orchestrator | Saturday 14 February 2026 02:51:21 +0000 (0:00:02.595) 0:00:21.204 *****
2026-02-14 02:51:21.630851 | orchestrator | ===============================================================================
2026-02-14 02:51:21.630862 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.75s
2026-02-14 02:51:21.630873 | orchestrator | Install python3-docker -------------------------------------------------- 2.60s
2026-02-14 02:51:21.630884 | orchestrator | Apply netplan configuration --------------------------------------------- 2.31s
2026-02-14 02:51:21.630894 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.97s
2026-02-14 02:51:21.630911 | orchestrator | Apply netplan configuration --------------------------------------------- 1.78s
2026-02-14 02:51:21.630929 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.65s
2026-02-14 02:51:21.630947 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.65s
2026-02-14 02:51:21.630964 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.65s
2026-02-14 02:51:21.630983 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.45s
2026-02-14 02:51:21.631001 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.81s
2026-02-14 02:51:21.631018 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.77s
2026-02-14 02:51:21.631045 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.70s
2026-02-14 02:51:22.301051 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-02-14 02:51:34.341569 | orchestrator | 2026-02-14 02:51:34 | INFO  | Task 879f324a-8aba-4244-9a0e-9890e6eb0883 (reboot) was prepared for execution.
2026-02-14 02:51:34.341684 | orchestrator | 2026-02-14 02:51:34 | INFO  | It takes a moment until task 879f324a-8aba-4244-9a0e-9890e6eb0883 (reboot) has been started and output is visible here. 2026-02-14 02:51:44.670227 | orchestrator | 2026-02-14 02:51:44.670344 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-14 02:51:44.670361 | orchestrator | 2026-02-14 02:51:44.670373 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-14 02:51:44.670386 | orchestrator | Saturday 14 February 2026 02:51:38 +0000 (0:00:00.205) 0:00:00.205 ***** 2026-02-14 02:51:44.670397 | orchestrator | skipping: [testbed-node-0] 2026-02-14 02:51:44.670409 | orchestrator | 2026-02-14 02:51:44.670420 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-14 02:51:44.670431 | orchestrator | Saturday 14 February 2026 02:51:38 +0000 (0:00:00.114) 0:00:00.319 ***** 2026-02-14 02:51:44.670442 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:51:44.670453 | orchestrator | 2026-02-14 02:51:44.670464 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-14 02:51:44.670498 | orchestrator | Saturday 14 February 2026 02:51:39 +0000 (0:00:00.906) 0:00:01.225 ***** 2026-02-14 02:51:44.670509 | orchestrator | skipping: [testbed-node-0] 2026-02-14 02:51:44.670520 | orchestrator | 2026-02-14 02:51:44.670575 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-14 02:51:44.670586 | orchestrator | 2026-02-14 02:51:44.670597 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-14 02:51:44.670608 | orchestrator | Saturday 14 February 2026 02:51:39 +0000 (0:00:00.110) 0:00:01.336 ***** 2026-02-14 02:51:44.670618 | orchestrator | skipping: [testbed-node-1] 2026-02-14 02:51:44.670629 | 
orchestrator | 2026-02-14 02:51:44.670640 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-14 02:51:44.670651 | orchestrator | Saturday 14 February 2026 02:51:39 +0000 (0:00:00.115) 0:00:01.452 ***** 2026-02-14 02:51:44.670662 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:51:44.670672 | orchestrator | 2026-02-14 02:51:44.670683 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-14 02:51:44.670710 | orchestrator | Saturday 14 February 2026 02:51:40 +0000 (0:00:00.728) 0:00:02.181 ***** 2026-02-14 02:51:44.670721 | orchestrator | skipping: [testbed-node-1] 2026-02-14 02:51:44.670732 | orchestrator | 2026-02-14 02:51:44.670743 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-14 02:51:44.670754 | orchestrator | 2026-02-14 02:51:44.670765 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-14 02:51:44.670778 | orchestrator | Saturday 14 February 2026 02:51:40 +0000 (0:00:00.120) 0:00:02.302 ***** 2026-02-14 02:51:44.670790 | orchestrator | skipping: [testbed-node-2] 2026-02-14 02:51:44.670802 | orchestrator | 2026-02-14 02:51:44.670814 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-14 02:51:44.670826 | orchestrator | Saturday 14 February 2026 02:51:40 +0000 (0:00:00.202) 0:00:02.504 ***** 2026-02-14 02:51:44.670838 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:51:44.670851 | orchestrator | 2026-02-14 02:51:44.670863 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-14 02:51:44.670875 | orchestrator | Saturday 14 February 2026 02:51:41 +0000 (0:00:00.662) 0:00:03.167 ***** 2026-02-14 02:51:44.670888 | orchestrator | skipping: [testbed-node-2] 2026-02-14 02:51:44.670900 | orchestrator | 2026-02-14 02:51:44.670912 | 
orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-14 02:51:44.670924 | orchestrator | 2026-02-14 02:51:44.670937 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-14 02:51:44.670950 | orchestrator | Saturday 14 February 2026 02:51:41 +0000 (0:00:00.108) 0:00:03.275 ***** 2026-02-14 02:51:44.670962 | orchestrator | skipping: [testbed-node-3] 2026-02-14 02:51:44.670974 | orchestrator | 2026-02-14 02:51:44.670986 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-14 02:51:44.670999 | orchestrator | Saturday 14 February 2026 02:51:41 +0000 (0:00:00.103) 0:00:03.379 ***** 2026-02-14 02:51:44.671011 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:51:44.671023 | orchestrator | 2026-02-14 02:51:44.671035 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-14 02:51:44.671047 | orchestrator | Saturday 14 February 2026 02:51:42 +0000 (0:00:00.745) 0:00:04.124 ***** 2026-02-14 02:51:44.671060 | orchestrator | skipping: [testbed-node-3] 2026-02-14 02:51:44.671072 | orchestrator | 2026-02-14 02:51:44.671084 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-14 02:51:44.671096 | orchestrator | 2026-02-14 02:51:44.671108 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-14 02:51:44.671121 | orchestrator | Saturday 14 February 2026 02:51:42 +0000 (0:00:00.127) 0:00:04.252 ***** 2026-02-14 02:51:44.671134 | orchestrator | skipping: [testbed-node-4] 2026-02-14 02:51:44.671147 | orchestrator | 2026-02-14 02:51:44.671158 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-14 02:51:44.671176 | orchestrator | Saturday 14 February 2026 02:51:42 +0000 (0:00:00.106) 0:00:04.358 ***** 2026-02-14 
02:51:44.671187 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:51:44.671198 | orchestrator | 2026-02-14 02:51:44.671208 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-14 02:51:44.671219 | orchestrator | Saturday 14 February 2026 02:51:43 +0000 (0:00:00.658) 0:00:05.017 ***** 2026-02-14 02:51:44.671230 | orchestrator | skipping: [testbed-node-4] 2026-02-14 02:51:44.671241 | orchestrator | 2026-02-14 02:51:44.671252 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-14 02:51:44.671263 | orchestrator | 2026-02-14 02:51:44.671274 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-14 02:51:44.671285 | orchestrator | Saturday 14 February 2026 02:51:43 +0000 (0:00:00.108) 0:00:05.125 ***** 2026-02-14 02:51:44.671295 | orchestrator | skipping: [testbed-node-5] 2026-02-14 02:51:44.671306 | orchestrator | 2026-02-14 02:51:44.671317 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-14 02:51:44.671328 | orchestrator | Saturday 14 February 2026 02:51:43 +0000 (0:00:00.107) 0:00:05.233 ***** 2026-02-14 02:51:44.671339 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:51:44.671349 | orchestrator | 2026-02-14 02:51:44.671360 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-14 02:51:44.671371 | orchestrator | Saturday 14 February 2026 02:51:44 +0000 (0:00:00.704) 0:00:05.938 ***** 2026-02-14 02:51:44.671399 | orchestrator | skipping: [testbed-node-5] 2026-02-14 02:51:44.671410 | orchestrator | 2026-02-14 02:51:44.671421 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 02:51:44.671433 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-14 02:51:44.671446 | orchestrator | 
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-14 02:51:44.671457 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-14 02:51:44.671467 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-14 02:51:44.671478 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-14 02:51:44.671489 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-14 02:51:44.671500 | orchestrator | 2026-02-14 02:51:44.671511 | orchestrator | 2026-02-14 02:51:44.671522 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 02:51:44.671550 | orchestrator | Saturday 14 February 2026 02:51:44 +0000 (0:00:00.040) 0:00:05.979 ***** 2026-02-14 02:51:44.671567 | orchestrator | =============================================================================== 2026-02-14 02:51:44.671578 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.41s 2026-02-14 02:51:44.671589 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.75s 2026-02-14 02:51:44.671600 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.62s 2026-02-14 02:51:44.962802 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-02-14 02:51:57.129342 | orchestrator | 2026-02-14 02:51:57 | INFO  | Task d94da481-4845-4c1f-86b6-a509118274c1 (wait-for-connection) was prepared for execution. 2026-02-14 02:51:57.129441 | orchestrator | 2026-02-14 02:51:57 | INFO  | It takes a moment until task d94da481-4845-4c1f-86b6-a509118274c1 (wait-for-connection) has been started and output is visible here. 
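The sequence above reboots the nodes without waiting, then runs `osism apply wait-for-connection` to block until they come back. The same pattern can be sketched as a plain shell polling loop — an illustration only, not the actual osism playbook; `check_reachable` is a hypothetical placeholder (a real check might be `ssh -o ConnectTimeout=5 "$host" true`):

```shell
#!/usr/bin/env bash
# Sketch of "wait until remote system is reachable" after an async reboot.
# check_reachable is a stand-in probe; timeout/interval are in seconds.
wait_for_connection() {
    local host=$1 timeout=${2:-600} interval=${3:-5}
    local deadline=$(( SECONDS + timeout ))
    until check_reachable "$host"; do
        if (( SECONDS >= deadline )); then
            return 1   # gave up before the node came back
        fi
        sleep "$interval"
    done
}
```

The actual job delegates this to Ansible's connection-wait logic, which additionally verifies that a usable Python interpreter is present on the rebooted host.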
2026-02-14 02:52:13.235758 | orchestrator | 2026-02-14 02:52:13.235879 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-02-14 02:52:13.235895 | orchestrator | 2026-02-14 02:52:13.235907 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-02-14 02:52:13.235919 | orchestrator | Saturday 14 February 2026 02:52:01 +0000 (0:00:00.245) 0:00:00.245 ***** 2026-02-14 02:52:13.235930 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:52:13.235942 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:52:13.235953 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:52:13.235963 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:52:13.235974 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:52:13.235985 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:52:13.235996 | orchestrator | 2026-02-14 02:52:13.236007 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 02:52:13.236018 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-14 02:52:13.236031 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-14 02:52:13.236042 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-14 02:52:13.236053 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-14 02:52:13.236064 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-14 02:52:13.236074 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-14 02:52:13.236086 | orchestrator | 2026-02-14 02:52:13.236097 | orchestrator | 2026-02-14 02:52:13.236108 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-14 02:52:13.236118 | orchestrator | Saturday 14 February 2026 02:52:12 +0000 (0:00:11.529) 0:00:11.774 ***** 2026-02-14 02:52:13.236129 | orchestrator | =============================================================================== 2026-02-14 02:52:13.236140 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.53s 2026-02-14 02:52:13.560562 | orchestrator | + osism apply hddtemp 2026-02-14 02:52:25.569275 | orchestrator | 2026-02-14 02:52:25 | INFO  | Task a01a5b43-66d9-497d-a71d-910eb561f35a (hddtemp) was prepared for execution. 2026-02-14 02:52:25.569401 | orchestrator | 2026-02-14 02:52:25 | INFO  | It takes a moment until task a01a5b43-66d9-497d-a71d-910eb561f35a (hddtemp) has been started and output is visible here. 2026-02-14 02:52:52.644564 | orchestrator | 2026-02-14 02:52:52.644654 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-02-14 02:52:52.644669 | orchestrator | 2026-02-14 02:52:52.644680 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-02-14 02:52:52.644693 | orchestrator | Saturday 14 February 2026 02:52:29 +0000 (0:00:00.261) 0:00:00.261 ***** 2026-02-14 02:52:52.644765 | orchestrator | ok: [testbed-manager] 2026-02-14 02:52:52.644786 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:52:52.644803 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:52:52.644822 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:52:52.644833 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:52:52.644846 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:52:52.644862 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:52:52.644879 | orchestrator | 2026-02-14 02:52:52.644896 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-02-14 02:52:52.644913 | orchestrator | Saturday 14 February 2026 
02:52:30 +0000 (0:00:00.639) 0:00:00.900 ***** 2026-02-14 02:52:52.644931 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-14 02:52:52.644976 | orchestrator | 2026-02-14 02:52:52.644994 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-02-14 02:52:52.645011 | orchestrator | Saturday 14 February 2026 02:52:31 +0000 (0:00:01.021) 0:00:01.921 ***** 2026-02-14 02:52:52.645026 | orchestrator | ok: [testbed-manager] 2026-02-14 02:52:52.645040 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:52:52.645056 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:52:52.645071 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:52:52.645086 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:52:52.645101 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:52:52.645118 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:52:52.645136 | orchestrator | 2026-02-14 02:52:52.645153 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-02-14 02:52:52.645184 | orchestrator | Saturday 14 February 2026 02:52:33 +0000 (0:00:01.772) 0:00:03.694 ***** 2026-02-14 02:52:52.645202 | orchestrator | changed: [testbed-manager] 2026-02-14 02:52:52.645219 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:52:52.645234 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:52:52.645250 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:52:52.645266 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:52:52.645283 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:52:52.645300 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:52:52.645317 | orchestrator | 2026-02-14 02:52:52.645334 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2026-02-14 02:52:52.645352 | orchestrator | Saturday 14 February 2026 02:52:34 +0000 (0:00:01.016) 0:00:04.711 ***** 2026-02-14 02:52:52.645369 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:52:52.645385 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:52:52.645402 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:52:52.645419 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:52:52.645437 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:52:52.645452 | orchestrator | ok: [testbed-manager] 2026-02-14 02:52:52.645469 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:52:52.645483 | orchestrator | 2026-02-14 02:52:52.645493 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-02-14 02:52:52.645503 | orchestrator | Saturday 14 February 2026 02:52:35 +0000 (0:00:01.055) 0:00:05.766 ***** 2026-02-14 02:52:52.645512 | orchestrator | skipping: [testbed-node-0] 2026-02-14 02:52:52.645521 | orchestrator | skipping: [testbed-node-1] 2026-02-14 02:52:52.645531 | orchestrator | changed: [testbed-manager] 2026-02-14 02:52:52.645540 | orchestrator | skipping: [testbed-node-2] 2026-02-14 02:52:52.645550 | orchestrator | skipping: [testbed-node-3] 2026-02-14 02:52:52.645559 | orchestrator | skipping: [testbed-node-4] 2026-02-14 02:52:52.645568 | orchestrator | skipping: [testbed-node-5] 2026-02-14 02:52:52.645577 | orchestrator | 2026-02-14 02:52:52.645587 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-02-14 02:52:52.645596 | orchestrator | Saturday 14 February 2026 02:52:36 +0000 (0:00:00.811) 0:00:06.578 ***** 2026-02-14 02:52:52.645606 | orchestrator | changed: [testbed-manager] 2026-02-14 02:52:52.645615 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:52:52.645625 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:52:52.645634 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:52:52.645643 | orchestrator | changed: 
[testbed-node-1] 2026-02-14 02:52:52.645653 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:52:52.645662 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:52:52.645671 | orchestrator | 2026-02-14 02:52:52.645681 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-02-14 02:52:52.645690 | orchestrator | Saturday 14 February 2026 02:52:48 +0000 (0:00:12.366) 0:00:18.944 ***** 2026-02-14 02:52:52.645700 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-14 02:52:52.645746 | orchestrator | 2026-02-14 02:52:52.645757 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-02-14 02:52:52.645773 | orchestrator | Saturday 14 February 2026 02:52:49 +0000 (0:00:01.242) 0:00:20.187 ***** 2026-02-14 02:52:52.645789 | orchestrator | changed: [testbed-manager] 2026-02-14 02:52:52.645805 | orchestrator | changed: [testbed-node-3] 2026-02-14 02:52:52.645823 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:52:52.645839 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:52:52.645855 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:52:52.645865 | orchestrator | changed: [testbed-node-5] 2026-02-14 02:52:52.645874 | orchestrator | changed: [testbed-node-4] 2026-02-14 02:52:52.645884 | orchestrator | 2026-02-14 02:52:52.645893 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 02:52:52.645903 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-14 02:52:52.645932 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-14 02:52:52.645942 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-14 02:52:52.645952 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-14 02:52:52.645961 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-14 02:52:52.645971 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-14 02:52:52.645980 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-14 02:52:52.645990 | orchestrator | 2026-02-14 02:52:52.645999 | orchestrator | 2026-02-14 02:52:52.646009 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 02:52:52.646070 | orchestrator | Saturday 14 February 2026 02:52:52 +0000 (0:00:02.657) 0:00:22.845 ***** 2026-02-14 02:52:52.646081 | orchestrator | =============================================================================== 2026-02-14 02:52:52.646090 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.37s 2026-02-14 02:52:52.646100 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.66s 2026-02-14 02:52:52.646109 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.77s 2026-02-14 02:52:52.646125 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.24s 2026-02-14 02:52:52.646135 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.06s 2026-02-14 02:52:52.646145 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.02s 2026-02-14 02:52:52.646164 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.02s 2026-02-14 02:52:52.646180 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.81s 2026-02-14 02:52:52.646197 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.64s 2026-02-14 02:52:52.838203 | orchestrator | ++ semver 9.5.0 7.1.1 2026-02-14 02:52:52.878607 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-14 02:52:52.878693 | orchestrator | + sudo systemctl restart manager.service 2026-02-14 02:53:06.547950 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-14 02:53:06.548070 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-14 02:53:06.548089 | orchestrator | + local max_attempts=60 2026-02-14 02:53:06.548103 | orchestrator | + local name=ceph-ansible 2026-02-14 02:53:06.548114 | orchestrator | + local attempt_num=1 2026-02-14 02:53:06.548126 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-14 02:53:06.588027 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-14 02:53:06.588124 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-14 02:53:06.588140 | orchestrator | + sleep 5 2026-02-14 02:53:11.594069 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-14 02:53:11.626546 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-14 02:53:11.626626 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-14 02:53:11.626636 | orchestrator | + sleep 5 2026-02-14 02:53:16.630231 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-14 02:53:16.666111 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-14 02:53:16.666204 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-14 02:53:16.666220 | orchestrator | + sleep 5 2026-02-14 02:53:21.670852 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-14 02:53:21.712852 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-14 02:53:21.712949 | orchestrator | 
+ (( attempt_num++ == max_attempts )) 2026-02-14 02:53:21.712963 | orchestrator | + sleep 5 2026-02-14 02:53:26.717404 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-14 02:53:26.752298 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-14 02:53:26.752410 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-14 02:53:26.752425 | orchestrator | + sleep 5 2026-02-14 02:53:31.758343 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-14 02:53:31.795568 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-14 02:53:31.795638 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-14 02:53:31.795644 | orchestrator | + sleep 5 2026-02-14 02:53:36.801300 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-14 02:53:36.842918 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-14 02:53:36.843037 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-14 02:53:36.843057 | orchestrator | + sleep 5 2026-02-14 02:53:41.851232 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-14 02:53:41.904428 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-14 02:53:41.904525 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-14 02:53:41.904540 | orchestrator | + sleep 5 2026-02-14 02:53:46.908331 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-14 02:53:46.945045 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-14 02:53:46.945155 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-14 02:53:46.945178 | orchestrator | + sleep 5 2026-02-14 02:53:51.947967 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-14 02:53:51.984216 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-14 02:53:51.984294 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2026-02-14 02:53:51.984303 | orchestrator | + sleep 5 2026-02-14 02:53:56.988336 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-14 02:53:57.031033 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-14 02:53:57.031108 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-14 02:53:57.031116 | orchestrator | + sleep 5 2026-02-14 02:54:02.037035 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-14 02:54:02.074076 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-14 02:54:02.074176 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-14 02:54:02.074191 | orchestrator | + sleep 5 2026-02-14 02:54:07.079646 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-14 02:54:07.123141 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-14 02:54:07.123257 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-14 02:54:07.123281 | orchestrator | + sleep 5 2026-02-14 02:54:12.128425 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-14 02:54:12.165579 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-14 02:54:12.165687 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-14 02:54:12.165705 | orchestrator | + local max_attempts=60 2026-02-14 02:54:12.165717 | orchestrator | + local name=kolla-ansible 2026-02-14 02:54:12.165729 | orchestrator | + local attempt_num=1 2026-02-14 02:54:12.166539 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-14 02:54:12.209252 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-14 02:54:12.209343 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-14 02:54:12.209476 | orchestrator | + local max_attempts=60 2026-02-14 02:54:12.209492 | orchestrator | + local name=osism-ansible 2026-02-14 02:54:12.209503 | 
orchestrator | + local attempt_num=1 2026-02-14 02:54:12.209525 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-14 02:54:12.247319 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-14 02:54:12.247492 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-14 02:54:12.247521 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-14 02:54:12.406572 | orchestrator | ARA in ceph-ansible already disabled. 2026-02-14 02:54:12.572241 | orchestrator | ARA in kolla-ansible already disabled. 2026-02-14 02:54:12.752269 | orchestrator | ARA in osism-ansible already disabled. 2026-02-14 02:54:12.903307 | orchestrator | ARA in osism-kubernetes already disabled. 2026-02-14 02:54:12.903546 | orchestrator | + osism apply gather-facts 2026-02-14 02:54:25.080339 | orchestrator | 2026-02-14 02:54:25 | INFO  | Task 9f5f8ba8-b072-428c-b221-dea3d9445403 (gather-facts) was prepared for execution. 2026-02-14 02:54:25.080433 | orchestrator | 2026-02-14 02:54:25 | INFO  | It takes a moment until task 9f5f8ba8-b072-428c-b221-dea3d9445403 (gather-facts) has been started and output is visible here. 
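The `wait_for_container_healthy` helper traced above (via `set -x`) can be reconstructed as the following bash function. The loop logic is taken directly from the trace — compare health status to `healthy`, bump the attempt counter against `max_attempts`, sleep 5 seconds, retry; the `DOCKER`/`SLEEP` overrides and the failure message are additions here for illustration and testability:

```shell
#!/usr/bin/env bash
# Reconstruction of wait_for_container_healthy from the set -x trace.
# The original calls /usr/bin/docker and `sleep 5` directly; the
# overridable DOCKER/SLEEP variables are added for this sketch only.
DOCKER="${DOCKER:-/usr/bin/docker}"
SLEEP="${SLEEP:-sleep 5}"

wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$($DOCKER inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            # Failure handling is assumed; the trace above never reaches
            # this branch (ceph-ansible turned healthy on attempt 14).
            echo "Container $name did not become healthy" >&2
            return 1
        fi
        $SLEEP
    done
}
```

In the log the container moves through `unhealthy` → `starting` → `healthy`, which matches Docker's healthcheck state machine after a service restart.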
2026-02-14 02:54:39.232744 | orchestrator | 2026-02-14 02:54:39.232841 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-14 02:54:39.232854 | orchestrator | 2026-02-14 02:54:39.232864 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-14 02:54:39.232873 | orchestrator | Saturday 14 February 2026 02:54:29 +0000 (0:00:00.220) 0:00:00.220 ***** 2026-02-14 02:54:39.232882 | orchestrator | ok: [testbed-manager] 2026-02-14 02:54:39.232891 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:54:39.232899 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:54:39.232907 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:54:39.232914 | orchestrator | ok: [testbed-node-3] 2026-02-14 02:54:39.232922 | orchestrator | ok: [testbed-node-4] 2026-02-14 02:54:39.232930 | orchestrator | ok: [testbed-node-5] 2026-02-14 02:54:39.232938 | orchestrator | 2026-02-14 02:54:39.232945 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-14 02:54:39.232953 | orchestrator | 2026-02-14 02:54:39.232961 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-14 02:54:39.232969 | orchestrator | Saturday 14 February 2026 02:54:38 +0000 (0:00:09.055) 0:00:09.276 ***** 2026-02-14 02:54:39.232977 | orchestrator | skipping: [testbed-manager] 2026-02-14 02:54:39.233027 | orchestrator | skipping: [testbed-node-0] 2026-02-14 02:54:39.233036 | orchestrator | skipping: [testbed-node-1] 2026-02-14 02:54:39.233044 | orchestrator | skipping: [testbed-node-2] 2026-02-14 02:54:39.233051 | orchestrator | skipping: [testbed-node-3] 2026-02-14 02:54:39.233059 | orchestrator | skipping: [testbed-node-4] 2026-02-14 02:54:39.233067 | orchestrator | skipping: [testbed-node-5] 2026-02-14 02:54:39.233075 | orchestrator | 2026-02-14 02:54:39.233082 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-14 02:54:39.233091 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-14 02:54:39.233100 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-14 02:54:39.233108 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-14 02:54:39.233116 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-14 02:54:39.233134 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-14 02:54:39.233142 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-14 02:54:39.233174 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-14 02:54:39.233183 | orchestrator | 2026-02-14 02:54:39.233191 | orchestrator | 2026-02-14 02:54:39.233198 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 02:54:39.233206 | orchestrator | Saturday 14 February 2026 02:54:38 +0000 (0:00:00.597) 0:00:09.873 ***** 2026-02-14 02:54:39.233214 | orchestrator | =============================================================================== 2026-02-14 02:54:39.233222 | orchestrator | Gathers facts about hosts ----------------------------------------------- 9.06s 2026-02-14 02:54:39.233230 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.60s 2026-02-14 02:54:39.569613 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-02-14 02:54:39.586559 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-02-14 
02:54:39.605331 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-02-14 02:54:39.619606 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-02-14 02:54:39.636262 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-02-14 02:54:39.650177 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-02-14 02:54:39.667516 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-02-14 02:54:39.679404 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-02-14 02:54:39.692757 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-02-14 02:54:39.705127 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-02-14 02:54:39.723234 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-02-14 02:54:39.741521 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-02-14 02:54:39.756882 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-02-14 02:54:39.778975 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-02-14 02:54:39.799430 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-02-14 02:54:39.815236 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-02-14 02:54:39.832319 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-02-14 02:54:39.845487 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-02-14 02:54:39.861890 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-02-14 02:54:39.879514 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-02-14 02:54:39.893890 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-02-14 02:54:39.912322 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-02-14 02:54:39.931705 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-02-14 02:54:39.946289 | orchestrator | + [[ false == \t\r\u\e ]] 2026-02-14 02:54:40.150203 | orchestrator | ok: Runtime: 0:25:08.706915 2026-02-14 02:54:40.266897 | 2026-02-14 02:54:40.267050 | TASK [Deploy services] 2026-02-14 02:54:40.976566 | orchestrator | 2026-02-14 02:54:40.976759 | orchestrator | # DEPLOY SERVICES 2026-02-14 02:54:40.976787 | orchestrator | 2026-02-14 02:54:40.976802 | orchestrator | + set -e 2026-02-14 02:54:40.976816 | orchestrator | + echo 2026-02-14 02:54:40.976831 | orchestrator | + echo '# DEPLOY SERVICES' 2026-02-14 02:54:40.976845 | orchestrator | + echo 2026-02-14 02:54:40.976890 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-14 02:54:40.976912 | orchestrator | ++ export INTERACTIVE=false 2026-02-14 02:54:40.976927 | orchestrator | ++ INTERACTIVE=false 2026-02-14 
02:54:40.976939 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-14 02:54:40.976960 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-14 02:54:40.976971 | orchestrator | + source /opt/manager-vars.sh 2026-02-14 02:54:40.976987 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-14 02:54:40.977028 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-14 02:54:40.977047 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-14 02:54:40.977058 | orchestrator | ++ CEPH_VERSION=reef 2026-02-14 02:54:40.977073 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-14 02:54:40.977085 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-14 02:54:40.977100 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-14 02:54:40.977111 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-14 02:54:40.977122 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-14 02:54:40.977134 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-14 02:54:40.977145 | orchestrator | ++ export ARA=false 2026-02-14 02:54:40.977156 | orchestrator | ++ ARA=false 2026-02-14 02:54:40.977167 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-14 02:54:40.977178 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-14 02:54:40.977190 | orchestrator | ++ export TEMPEST=false 2026-02-14 02:54:40.977201 | orchestrator | ++ TEMPEST=false 2026-02-14 02:54:40.977211 | orchestrator | ++ export IS_ZUUL=true 2026-02-14 02:54:40.977222 | orchestrator | ++ IS_ZUUL=true 2026-02-14 02:54:40.977233 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.122 2026-02-14 02:54:40.977245 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.122 2026-02-14 02:54:40.977256 | orchestrator | ++ export EXTERNAL_API=false 2026-02-14 02:54:40.977267 | orchestrator | ++ EXTERNAL_API=false 2026-02-14 02:54:40.977278 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-14 02:54:40.977349 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-14 02:54:40.977369 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-14 
2026-02-14 02:54:40.977388 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-14 02:54:40.977407 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-14 02:54:40.977433 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-14 02:54:40.977452 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh
2026-02-14 02:54:40.983058 | orchestrator | + set -e
2026-02-14 02:54:40.983181 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-14 02:54:40.983195 | orchestrator | ++ export INTERACTIVE=false
2026-02-14 02:54:40.983207 | orchestrator | ++ INTERACTIVE=false
2026-02-14 02:54:40.983218 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-14 02:54:40.983229 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-14 02:54:40.983239 | orchestrator | + source /opt/manager-vars.sh
2026-02-14 02:54:40.983250 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-14 02:54:40.983261 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-14 02:54:40.983272 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-14 02:54:40.983283 | orchestrator | ++ CEPH_VERSION=reef
2026-02-14 02:54:40.983293 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-14 02:54:40.983304 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-14 02:54:40.983315 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-14 02:54:40.983326 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-14 02:54:40.983337 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-14 02:54:40.983347 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-14 02:54:40.983359 | orchestrator | ++ export ARA=false
2026-02-14 02:54:40.983370 | orchestrator | ++ ARA=false
2026-02-14 02:54:40.983380 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-14 02:54:40.983391 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-14 02:54:40.983410 | orchestrator |
2026-02-14 02:54:40.983424 | orchestrator | # PULL IMAGES
2026-02-14 02:54:40.983435 | orchestrator |
2026-02-14 02:54:40.983446 | orchestrator | ++ export TEMPEST=false
2026-02-14 02:54:40.983458 | orchestrator | ++ TEMPEST=false
2026-02-14 02:54:40.983468 | orchestrator | ++ export IS_ZUUL=true
2026-02-14 02:54:40.983479 | orchestrator | ++ IS_ZUUL=true
2026-02-14 02:54:40.983490 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.122
2026-02-14 02:54:40.983501 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.122
2026-02-14 02:54:40.983512 | orchestrator | ++ export EXTERNAL_API=false
2026-02-14 02:54:40.983522 | orchestrator | ++ EXTERNAL_API=false
2026-02-14 02:54:40.983533 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-14 02:54:40.983544 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-14 02:54:40.983577 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-14 02:54:40.983589 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-14 02:54:40.983600 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-14 02:54:40.983610 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-14 02:54:40.983621 | orchestrator | + echo
2026-02-14 02:54:40.983632 | orchestrator | + echo '# PULL IMAGES'
2026-02-14 02:54:40.983643 | orchestrator | + echo
2026-02-14 02:54:40.983932 | orchestrator | ++ semver 9.5.0 7.0.0
2026-02-14 02:54:41.031409 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-14 02:54:41.031510 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-02-14 02:54:42.965186 | orchestrator | 2026-02-14 02:54:42 | INFO  | Trying to run play pull-images in environment custom
2026-02-14 02:54:53.119433 | orchestrator | 2026-02-14 02:54:53 | INFO  | Task 956bb394-e72e-4e91-954e-1e7b3a0a1190 (pull-images) was prepared for execution.
2026-02-14 02:54:53.119590 | orchestrator | 2026-02-14 02:54:53 | INFO  | Task 956bb394-e72e-4e91-954e-1e7b3a0a1190 is running in background. No more output. Check ARA for logs.
2026-02-14 02:54:53.420789 | orchestrator | + sh -c /opt/configuration/scripts/deploy/001-helpers.sh
2026-02-14 02:55:05.473478 | orchestrator | 2026-02-14 02:55:05 | INFO  | Task 5b652607-37e0-4785-a923-0731d3cc63cb (cgit) was prepared for execution.
2026-02-14 02:55:05.473596 | orchestrator | 2026-02-14 02:55:05 | INFO  | Task 5b652607-37e0-4785-a923-0731d3cc63cb is running in background. No more output. Check ARA for logs.
2026-02-14 02:55:17.931190 | orchestrator | 2026-02-14 02:55:17 | INFO  | Task 4a3b266d-708e-48f5-92e7-b1f55fcd7582 (dotfiles) was prepared for execution.
2026-02-14 02:55:17.931308 | orchestrator | 2026-02-14 02:55:17 | INFO  | Task 4a3b266d-708e-48f5-92e7-b1f55fcd7582 is running in background. No more output. Check ARA for logs.
2026-02-14 02:55:30.470478 | orchestrator | 2026-02-14 02:55:30 | INFO  | Task 56004325-136f-438a-8608-6694b8b1bc5e (homer) was prepared for execution.
2026-02-14 02:55:30.470579 | orchestrator | 2026-02-14 02:55:30 | INFO  | Task 56004325-136f-438a-8608-6694b8b1bc5e is running in background. No more output. Check ARA for logs.
2026-02-14 02:55:43.256716 | orchestrator | 2026-02-14 02:55:43 | INFO  | Task 9ea037a0-9112-46b4-a3ec-4756c9b09278 (phpmyadmin) was prepared for execution.
2026-02-14 02:55:43.256837 | orchestrator | 2026-02-14 02:55:43 | INFO  | Task 9ea037a0-9112-46b4-a3ec-4756c9b09278 is running in background. No more output. Check ARA for logs.
2026-02-14 02:55:55.663177 | orchestrator | 2026-02-14 02:55:55 | INFO  | Task e7653781-157e-4159-8910-81ee01e26e53 (sosreport) was prepared for execution.
2026-02-14 02:55:55.663327 | orchestrator | 2026-02-14 02:55:55 | INFO  | Task e7653781-157e-4159-8910-81ee01e26e53 is running in background. No more output. Check ARA for logs.
2026-02-14 02:55:55.988132 | orchestrator | + sh -c /opt/configuration/scripts/deploy/500-kubernetes.sh
2026-02-14 02:55:55.999560 | orchestrator | + set -e
2026-02-14 02:55:55.999630 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-14 02:55:55.999645 | orchestrator | ++ export INTERACTIVE=false
2026-02-14 02:55:55.999659 | orchestrator | ++ INTERACTIVE=false
2026-02-14 02:55:55.999674 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-14 02:55:55.999685 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-14 02:55:55.999696 | orchestrator | + source /opt/manager-vars.sh
2026-02-14 02:55:55.999708 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-14 02:55:55.999719 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-14 02:55:55.999729 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-14 02:55:55.999740 | orchestrator | ++ CEPH_VERSION=reef
2026-02-14 02:55:55.999752 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-14 02:55:55.999763 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-14 02:55:55.999774 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-14 02:55:55.999785 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-14 02:55:55.999797 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-14 02:55:55.999808 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-14 02:55:55.999819 | orchestrator | ++ export ARA=false
2026-02-14 02:55:55.999830 | orchestrator | ++ ARA=false
2026-02-14 02:55:55.999841 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-14 02:55:55.999885 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-14 02:55:55.999896 | orchestrator | ++ export TEMPEST=false
2026-02-14 02:55:55.999908 | orchestrator | ++ TEMPEST=false
2026-02-14 02:55:55.999919 | orchestrator | ++ export IS_ZUUL=true
2026-02-14 02:55:55.999930 | orchestrator | ++ IS_ZUUL=true
2026-02-14 02:55:55.999956 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.122
2026-02-14 02:55:55.999973 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.122
2026-02-14 02:55:55.999985 | orchestrator | ++ export EXTERNAL_API=false
2026-02-14 02:55:55.999996 | orchestrator | ++ EXTERNAL_API=false
2026-02-14 02:55:56.000006 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-14 02:55:56.000018 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-14 02:55:56.000039 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-14 02:55:56.000050 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-14 02:55:56.000062 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-14 02:55:56.000073 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-14 02:55:56.001515 | orchestrator | ++ semver 9.5.0 8.0.3
2026-02-14 02:55:56.070461 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-14 02:55:56.070548 | orchestrator | + osism apply frr
2026-02-14 02:56:08.399998 | orchestrator | 2026-02-14 02:56:08 | INFO  | Task 2665210b-71ef-4f9d-8bce-f3fe496987c3 (frr) was prepared for execution.
2026-02-14 02:56:08.400120 | orchestrator | 2026-02-14 02:56:08 | INFO  | It takes a moment until task 2665210b-71ef-4f9d-8bce-f3fe496987c3 (frr) has been started and output is visible here.
2026-02-14 02:56:36.047486 | orchestrator |
2026-02-14 02:56:36.047588 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-02-14 02:56:36.047605 | orchestrator |
2026-02-14 02:56:36.047617 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-02-14 02:56:36.047634 | orchestrator | Saturday 14 February 2026 02:56:13 +0000 (0:00:00.233) 0:00:00.233 *****
2026-02-14 02:56:36.047645 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-02-14 02:56:36.047657 | orchestrator |
2026-02-14 02:56:36.047669 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-02-14 02:56:36.047682 | orchestrator | Saturday 14 February 2026 02:56:13 +0000 (0:00:00.222) 0:00:00.455 *****
2026-02-14 02:56:36.047705 | orchestrator | changed: [testbed-manager]
2026-02-14 02:56:36.047734 | orchestrator |
2026-02-14 02:56:36.047753 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-02-14 02:56:36.047773 | orchestrator | Saturday 14 February 2026 02:56:15 +0000 (0:00:01.356) 0:00:01.813 *****
2026-02-14 02:56:36.047793 | orchestrator | changed: [testbed-manager]
2026-02-14 02:56:36.047812 | orchestrator |
2026-02-14 02:56:36.047831 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-02-14 02:56:36.047843 | orchestrator | Saturday 14 February 2026 02:56:26 +0000 (0:00:10.981) 0:00:12.795 *****
2026-02-14 02:56:36.047854 | orchestrator | ok: [testbed-manager]
2026-02-14 02:56:36.047866 | orchestrator |
2026-02-14 02:56:36.047876 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-02-14 02:56:36.047887 | orchestrator | Saturday 14 February 2026 02:56:27 +0000 (0:00:00.953) 0:00:13.748 *****
2026-02-14 02:56:36.047898 | orchestrator | changed: [testbed-manager]
2026-02-14 02:56:36.047909 | orchestrator |
2026-02-14 02:56:36.047919 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-02-14 02:56:36.047930 | orchestrator | Saturday 14 February 2026 02:56:27 +0000 (0:00:00.785) 0:00:14.534 *****
2026-02-14 02:56:36.047941 | orchestrator | ok: [testbed-manager]
2026-02-14 02:56:36.047952 | orchestrator |
2026-02-14 02:56:36.047963 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-02-14 02:56:36.047974 | orchestrator | Saturday 14 February 2026 02:56:29 +0000 (0:00:01.109) 0:00:15.643 *****
2026-02-14 02:56:36.047985 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:56:36.047996 | orchestrator |
2026-02-14 02:56:36.048007 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-02-14 02:56:36.048018 | orchestrator | Saturday 14 February 2026 02:56:29 +0000 (0:00:00.107) 0:00:15.751 *****
2026-02-14 02:56:36.048051 | orchestrator | skipping: [testbed-manager]
2026-02-14 02:56:36.048065 | orchestrator |
2026-02-14 02:56:36.048077 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-02-14 02:56:36.048090 | orchestrator | Saturday 14 February 2026 02:56:29 +0000 (0:00:00.123) 0:00:15.874 *****
2026-02-14 02:56:36.048103 | orchestrator | changed: [testbed-manager]
2026-02-14 02:56:36.048116 | orchestrator |
2026-02-14 02:56:36.048128 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-02-14 02:56:36.048142 | orchestrator | Saturday 14 February 2026 02:56:30 +0000 (0:00:00.882) 0:00:16.756 *****
2026-02-14 02:56:36.048162 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-02-14 02:56:36.048180 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-02-14 02:56:36.048199 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-02-14 02:56:36.048219 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-02-14 02:56:36.048240 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-02-14 02:56:36.048260 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-02-14 02:56:36.048309 | orchestrator |
2026-02-14 02:56:36.048324 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-02-14 02:56:36.048338 | orchestrator | Saturday 14 February 2026 02:56:33 +0000 (0:00:02.957) 0:00:19.714 *****
2026-02-14 02:56:36.048348 | orchestrator | ok: [testbed-manager]
2026-02-14 02:56:36.048359 | orchestrator |
2026-02-14 02:56:36.048370 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2026-02-14 02:56:36.048381 | orchestrator | Saturday 14 February 2026 02:56:34 +0000 (0:00:01.385) 0:00:21.099 *****
2026-02-14 02:56:36.048391 | orchestrator | changed: [testbed-manager]
2026-02-14 02:56:36.048402 | orchestrator |
2026-02-14 02:56:36.048413 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 02:56:36.048424 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-14 02:56:36.048435 | orchestrator |
2026-02-14 02:56:36.048446 | orchestrator |
2026-02-14 02:56:36.048464 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 02:56:36.048476 | orchestrator | Saturday 14 February 2026 02:56:35 +0000 (0:00:01.339) 0:00:22.438 *****
2026-02-14 02:56:36.048487 | orchestrator | ===============================================================================
2026-02-14 02:56:36.048497 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.98s
2026-02-14 02:56:36.048508 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.96s
2026-02-14 02:56:36.048519 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.39s
2026-02-14 02:56:36.048530 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.36s
2026-02-14 02:56:36.048540 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.34s
2026-02-14 02:56:36.048569 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.11s
2026-02-14 02:56:36.048581 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.95s
2026-02-14 02:56:36.048592 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.88s
2026-02-14 02:56:36.048602 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.79s
2026-02-14 02:56:36.048613 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.22s
2026-02-14 02:56:36.048624 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.12s
2026-02-14 02:56:36.048635 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.11s
2026-02-14 02:56:36.283051 | orchestrator | + osism apply kubernetes
2026-02-14 02:56:38.384138 | orchestrator | 2026-02-14 02:56:38 | INFO  | Task 1c966e45-2a39-4abf-b0b8-6738c65b710e (kubernetes) was prepared for execution.
2026-02-14 02:56:38.384234 | orchestrator | 2026-02-14 02:56:38 | INFO  | It takes a moment until task 1c966e45-2a39-4abf-b0b8-6738c65b710e (kubernetes) has been started and output is visible here.
2026-02-14 02:57:02.204743 | orchestrator |
2026-02-14 02:57:02.204855 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-02-14 02:57:02.204872 | orchestrator |
2026-02-14 02:57:02.204884 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-02-14 02:57:02.204896 | orchestrator | Saturday 14 February 2026 02:56:43 +0000 (0:00:00.163) 0:00:00.163 *****
2026-02-14 02:57:02.204907 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:57:02.204919 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:57:02.204930 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:57:02.204942 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:57:02.204953 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:57:02.204964 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:57:02.204975 | orchestrator |
2026-02-14 02:57:02.204986 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-02-14 02:57:02.204997 | orchestrator | Saturday 14 February 2026 02:56:43 +0000 (0:00:00.703) 0:00:00.867 *****
2026-02-14 02:57:02.205008 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:57:02.205020 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:57:02.205031 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:57:02.205042 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:57:02.205053 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:57:02.205063 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:57:02.205075 | orchestrator |
2026-02-14 02:57:02.205086 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-02-14 02:57:02.205099 | orchestrator | Saturday 14 February 2026 02:56:44 +0000 (0:00:00.596) 0:00:01.463 *****
2026-02-14 02:57:02.205110 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:57:02.205121 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:57:02.205132 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:57:02.205143 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:57:02.205154 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:57:02.205165 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:57:02.205176 | orchestrator |
2026-02-14 02:57:02.205187 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-02-14 02:57:02.205198 | orchestrator | Saturday 14 February 2026 02:56:45 +0000 (0:00:00.695) 0:00:02.159 *****
2026-02-14 02:57:02.205209 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:57:02.205220 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:57:02.205230 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:57:02.205246 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:57:02.205258 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:57:02.205271 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:57:02.205284 | orchestrator |
2026-02-14 02:57:02.205297 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-02-14 02:57:02.205311 | orchestrator | Saturday 14 February 2026 02:56:47 +0000 (0:00:02.606) 0:00:04.765 *****
2026-02-14 02:57:02.205324 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:57:02.205413 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:57:02.205429 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:57:02.205442 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:57:02.205455 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:57:02.205468 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:57:02.205487 | orchestrator |
2026-02-14 02:57:02.205506 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-02-14 02:57:02.205525 | orchestrator | Saturday 14 February 2026 02:56:48 +0000 (0:00:01.162) 0:00:05.927 *****
2026-02-14 02:57:02.205543 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:57:02.205591 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:57:02.205608 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:57:02.205627 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:57:02.205646 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:57:02.205663 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:57:02.205683 | orchestrator |
2026-02-14 02:57:02.205713 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-02-14 02:57:02.205729 | orchestrator | Saturday 14 February 2026 02:56:49 +0000 (0:00:00.971) 0:00:06.899 *****
2026-02-14 02:57:02.205740 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:57:02.205751 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:57:02.205761 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:57:02.205773 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:57:02.205783 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:57:02.205794 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:57:02.205804 | orchestrator |
2026-02-14 02:57:02.205815 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-02-14 02:57:02.205826 | orchestrator | Saturday 14 February 2026 02:56:50 +0000 (0:00:00.769) 0:00:07.668 *****
2026-02-14 02:57:02.205837 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:57:02.205847 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:57:02.205858 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:57:02.205869 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:57:02.205879 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:57:02.205890 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:57:02.205901 | orchestrator |
2026-02-14 02:57:02.205914 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-02-14 02:57:02.205933 | orchestrator | Saturday 14 February 2026 02:56:51 +0000 (0:00:00.587) 0:00:08.256 *****
2026-02-14 02:57:02.205950 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-14 02:57:02.205967 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-14 02:57:02.205987 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:57:02.206000 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-14 02:57:02.206012 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-14 02:57:02.206078 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:57:02.206090 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-14 02:57:02.206106 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-14 02:57:02.206125 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:57:02.206143 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-14 02:57:02.206185 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-14 02:57:02.206204 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:57:02.206223 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-14 02:57:02.206242 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-14 02:57:02.206261 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:57:02.206276 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-14 02:57:02.206287 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-14 02:57:02.206298 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:57:02.206309 | orchestrator |
2026-02-14 02:57:02.206320 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-02-14 02:57:02.206331 | orchestrator | Saturday 14 February 2026 02:56:51 +0000 (0:00:00.587) 0:00:08.844 *****
2026-02-14 02:57:02.206366 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:57:02.206377 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:57:02.206388 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:57:02.206411 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:57:02.206422 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:57:02.206433 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:57:02.206444 | orchestrator |
2026-02-14 02:57:02.206455 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-02-14 02:57:02.206467 | orchestrator | Saturday 14 February 2026 02:56:52 +0000 (0:00:01.078) 0:00:09.923 *****
2026-02-14 02:57:02.206478 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:57:02.206489 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:57:02.206500 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:57:02.206511 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:57:02.206521 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:57:02.206532 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:57:02.206543 | orchestrator |
2026-02-14 02:57:02.206554 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-02-14 02:57:02.206565 | orchestrator | Saturday 14 February 2026 02:56:53 +0000 (0:00:00.927) 0:00:10.850 *****
2026-02-14 02:57:02.206576 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:57:02.206587 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:57:02.206597 | orchestrator | changed: [testbed-node-3]
2026-02-14 02:57:02.206608 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:57:02.206619 | orchestrator | changed: [testbed-node-4]
2026-02-14 02:57:02.206630 | orchestrator | changed: [testbed-node-5]
2026-02-14 02:57:02.206640 | orchestrator |
2026-02-14 02:57:02.206651 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-02-14 02:57:02.206663 | orchestrator | Saturday 14 February 2026 02:56:58 +0000 (0:00:04.758) 0:00:15.609 *****
2026-02-14 02:57:02.206673 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:57:02.206691 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:57:02.206703 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:57:02.206714 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:57:02.206725 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:57:02.206735 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:57:02.206746 | orchestrator |
2026-02-14 02:57:02.206757 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-02-14 02:57:02.206768 | orchestrator | Saturday 14 February 2026 02:56:59 +0000 (0:00:00.989) 0:00:16.598 *****
2026-02-14 02:57:02.206779 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:57:02.206789 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:57:02.206800 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:57:02.206811 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:57:02.206821 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:57:02.206832 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:57:02.206843 | orchestrator |
2026-02-14 02:57:02.206854 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-02-14 02:57:02.206866 | orchestrator | Saturday 14 February 2026 02:57:00 +0000 (0:00:01.284) 0:00:17.883 *****
2026-02-14 02:57:02.206877 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:57:02.206888 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:57:02.206898 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:57:02.206909 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:57:02.206920 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:57:02.206931 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:57:02.206941 | orchestrator |
2026-02-14 02:57:02.206952 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-02-14 02:57:02.206963 | orchestrator | Saturday 14 February 2026 02:57:01 +0000 (0:00:00.595) 0:00:18.479 *****
2026-02-14 02:57:02.206974 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-02-14 02:57:02.206991 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-02-14 02:57:02.207002 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:57:02.207013 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-02-14 02:57:02.207030 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-02-14 02:57:02.207041 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:57:02.207052 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-02-14 02:57:02.207062 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-02-14 02:57:02.207073 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:57:02.207084 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-02-14 02:57:02.207095 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-02-14 02:57:02.207105 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:57:02.207116 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-02-14 02:57:02.207127 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-02-14 02:57:02.207137 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:57:02.207148 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-02-14 02:57:02.207159 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-02-14 02:57:02.207170 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:57:02.207181 | orchestrator |
2026-02-14 02:57:02.207191 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-02-14 02:57:02.207210 | orchestrator | Saturday 14 February 2026 02:57:02 +0000 (0:00:00.857) 0:00:19.336 *****
2026-02-14 02:58:16.568603 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:58:16.568718 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:58:16.568734 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:58:16.568746 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:58:16.568758 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:58:16.568769 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:58:16.568781 | orchestrator |
2026-02-14 02:58:16.568794 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-02-14 02:58:16.568807 | orchestrator | Saturday 14 February 2026 02:57:02 +0000 (0:00:00.566) 0:00:19.902 *****
2026-02-14 02:58:16.568818 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:58:16.568829 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:58:16.568840 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:58:16.568851 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:58:16.568862 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:58:16.568873 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:58:16.568884 | orchestrator |
2026-02-14 02:58:16.568895 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-02-14 02:58:16.568906 | orchestrator |
2026-02-14 02:58:16.568917 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-02-14 02:58:16.569010 | orchestrator | Saturday 14 February 2026 02:57:03 +0000 (0:00:01.128) 0:00:21.031 *****
2026-02-14 02:58:16.569022 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:58:16.569034 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:58:16.569045 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:58:16.569057 | orchestrator |
2026-02-14 02:58:16.569068 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-02-14 02:58:16.569081 | orchestrator | Saturday 14 February 2026 02:57:04 +0000 (0:00:01.035) 0:00:22.066 *****
2026-02-14 02:58:16.569094 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:58:16.569106 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:58:16.569118 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:58:16.569130 | orchestrator |
2026-02-14 02:58:16.569143 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-02-14 02:58:16.569155 | orchestrator | Saturday 14 February 2026 02:57:05 +0000 (0:00:01.086) 0:00:23.153 *****
2026-02-14 02:58:16.569166 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:58:16.569177 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:58:16.569188 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:58:16.569199 | orchestrator |
2026-02-14 02:58:16.569211 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-02-14 02:58:16.569245 | orchestrator | Saturday 14 February 2026 02:57:06 +0000 (0:00:00.632) 0:00:24.058 *****
2026-02-14 02:58:16.569257 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:58:16.569267 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:58:16.569278 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:58:16.569289 | orchestrator |
2026-02-14 02:58:16.569300 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-02-14 02:58:16.569311 | orchestrator | Saturday 14 February 2026 02:57:07 +0000 (0:00:00.294) 0:00:24.690 *****
2026-02-14 02:58:16.569321 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:58:16.569332 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:58:16.569343 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:58:16.569354 | orchestrator |
2026-02-14 02:58:16.569365 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-02-14 02:58:16.569406 | orchestrator | Saturday 14 February 2026 02:57:07 +0000 (0:00:00.294) 0:00:24.984 *****
2026-02-14 02:58:16.569418 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:58:16.569429 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:58:16.569440 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:58:16.569450 | orchestrator |
2026-02-14 02:58:16.569461 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-02-14 02:58:16.569472 | orchestrator | Saturday 14 February 2026 02:57:08 +0000 (0:00:00.988) 0:00:25.972 *****
2026-02-14 02:58:16.569483 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:58:16.569515 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:58:16.569527 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:58:16.569538 | orchestrator |
2026-02-14 02:58:16.569549 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-02-14 02:58:16.569559 | orchestrator | Saturday 14 February 2026 02:57:10 +0000 (0:00:01.602) 0:00:27.575 *****
2026-02-14 02:58:16.569570 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 02:58:16.569581 | orchestrator |
2026-02-14 02:58:16.569592 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-02-14 02:58:16.569603 | orchestrator |
Saturday 14 February 2026 02:57:10 +0000 (0:00:00.519) 0:00:28.094 ***** 2026-02-14 02:58:16.569614 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:58:16.569624 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:58:16.569635 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:58:16.569646 | orchestrator | 2026-02-14 02:58:16.569657 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-02-14 02:58:16.569668 | orchestrator | Saturday 14 February 2026 02:57:13 +0000 (0:00:02.433) 0:00:30.528 ***** 2026-02-14 02:58:16.569678 | orchestrator | skipping: [testbed-node-1] 2026-02-14 02:58:16.569689 | orchestrator | skipping: [testbed-node-2] 2026-02-14 02:58:16.569700 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:58:16.569711 | orchestrator | 2026-02-14 02:58:16.569721 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-02-14 02:58:16.569732 | orchestrator | Saturday 14 February 2026 02:57:13 +0000 (0:00:00.547) 0:00:31.075 ***** 2026-02-14 02:58:16.569743 | orchestrator | skipping: [testbed-node-1] 2026-02-14 02:58:16.569754 | orchestrator | skipping: [testbed-node-2] 2026-02-14 02:58:16.569765 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:58:16.569775 | orchestrator | 2026-02-14 02:58:16.569786 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-02-14 02:58:16.569797 | orchestrator | Saturday 14 February 2026 02:57:14 +0000 (0:00:00.987) 0:00:32.063 ***** 2026-02-14 02:58:16.569807 | orchestrator | skipping: [testbed-node-1] 2026-02-14 02:58:16.569818 | orchestrator | skipping: [testbed-node-2] 2026-02-14 02:58:16.569829 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:58:16.569840 | orchestrator | 2026-02-14 02:58:16.569851 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-02-14 02:58:16.569882 | orchestrator | Saturday 14 February 
2026 02:57:16 +0000 (0:00:01.166) 0:00:33.230 ***** 2026-02-14 02:58:16.569894 | orchestrator | skipping: [testbed-node-0] 2026-02-14 02:58:16.569915 | orchestrator | skipping: [testbed-node-1] 2026-02-14 02:58:16.569926 | orchestrator | skipping: [testbed-node-2] 2026-02-14 02:58:16.569937 | orchestrator | 2026-02-14 02:58:16.569948 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-02-14 02:58:16.569959 | orchestrator | Saturday 14 February 2026 02:57:16 +0000 (0:00:00.403) 0:00:33.633 ***** 2026-02-14 02:58:16.569970 | orchestrator | skipping: [testbed-node-0] 2026-02-14 02:58:16.569981 | orchestrator | skipping: [testbed-node-1] 2026-02-14 02:58:16.569991 | orchestrator | skipping: [testbed-node-2] 2026-02-14 02:58:16.570002 | orchestrator | 2026-02-14 02:58:16.570013 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-02-14 02:58:16.570088 | orchestrator | Saturday 14 February 2026 02:57:16 +0000 (0:00:00.271) 0:00:33.904 ***** 2026-02-14 02:58:16.570100 | orchestrator | changed: [testbed-node-0] 2026-02-14 02:58:16.570110 | orchestrator | changed: [testbed-node-2] 2026-02-14 02:58:16.570121 | orchestrator | changed: [testbed-node-1] 2026-02-14 02:58:16.570132 | orchestrator | 2026-02-14 02:58:16.570150 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-02-14 02:58:16.570171 | orchestrator | Saturday 14 February 2026 02:57:17 +0000 (0:00:01.084) 0:00:34.989 ***** 2026-02-14 02:58:16.570182 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:58:16.570193 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:58:16.570204 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:58:16.570215 | orchestrator | 2026-02-14 02:58:16.570225 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-02-14 02:58:16.570237 | orchestrator | Saturday 14 February 2026 02:57:21 +0000 
(0:00:03.313) 0:00:38.302 ***** 2026-02-14 02:58:16.570248 | orchestrator | ok: [testbed-node-0] 2026-02-14 02:58:16.570258 | orchestrator | ok: [testbed-node-1] 2026-02-14 02:58:16.570269 | orchestrator | ok: [testbed-node-2] 2026-02-14 02:58:16.570284 | orchestrator | 2026-02-14 02:58:16.570296 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-02-14 02:58:16.570307 | orchestrator | Saturday 14 February 2026 02:57:21 +0000 (0:00:00.327) 0:00:38.630 ***** 2026-02-14 02:58:16.570319 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-14 02:58:16.570331 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-14 02:58:16.570342 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-14 02:58:16.570353 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-14 02:58:16.570364 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-14 02:58:16.570375 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-14 02:58:16.570386 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-14 02:58:16.570396 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
2026-02-14 02:58:16.570407 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-14 02:58:16.570418 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-14 02:58:16.570429 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-14 02:58:16.570447 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-14 02:58:16.570458 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-02-14 02:58:16.570469 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-02-14 02:58:16.570480 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
2026-02-14 02:58:16.570490 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:58:16.570524 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:58:16.570535 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:58:16.570545 | orchestrator |
2026-02-14 02:58:16.570562 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-02-14 02:58:16.570574 | orchestrator | Saturday 14 February 2026 02:58:15 +0000 (0:00:53.809) 0:01:32.439 *****
2026-02-14 02:58:16.570584 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:58:16.570595 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:58:16.570606 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:58:16.570617 | orchestrator |
2026-02-14 02:58:16.570628 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-02-14 02:58:16.570639 | orchestrator | Saturday 14 February 2026 02:58:15 +0000 (0:00:00.294) 0:01:32.734 *****
2026-02-14 02:58:16.570659 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:58:58.242206 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:58:58.242335 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:58:58.242353 | orchestrator |
2026-02-14 02:58:58.242366 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-02-14 02:58:58.242379 | orchestrator | Saturday 14 February 2026 02:58:16 +0000 (0:00:00.977) 0:01:33.712 *****
2026-02-14 02:58:58.242390 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:58:58.242402 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:58:58.242413 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:58:58.242424 | orchestrator |
2026-02-14 02:58:58.242435 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-02-14 02:58:58.242446 | orchestrator | Saturday 14 February 2026 02:58:17 +0000 (0:00:01.153) 0:01:34.865 *****
2026-02-14 02:58:58.242457 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:58:58.242468 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:58:58.242479 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:58:58.242490 | orchestrator |
2026-02-14 02:58:58.242501 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-02-14 02:58:58.242512 | orchestrator | Saturday 14 February 2026 02:58:44 +0000 (0:00:26.498) 0:02:01.364 *****
2026-02-14 02:58:58.242523 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:58:58.242536 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:58:58.242547 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:58:58.242557 | orchestrator |
2026-02-14 02:58:58.242568 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-02-14 02:58:58.242579 | orchestrator | Saturday 14 February 2026 02:58:44 +0000 (0:00:00.626) 0:02:01.991 *****
2026-02-14 02:58:58.242624 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:58:58.242635 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:58:58.242646 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:58:58.242656 | orchestrator |
2026-02-14 02:58:58.242667 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-02-14 02:58:58.242681 | orchestrator | Saturday 14 February 2026 02:58:45 +0000 (0:00:00.627) 0:02:02.599 *****
2026-02-14 02:58:58.242693 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:58:58.242705 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:58:58.242717 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:58:58.242729 | orchestrator |
2026-02-14 02:58:58.242741 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-02-14 02:58:58.242781 | orchestrator | Saturday 14 February 2026 02:58:46 +0000 (0:00:00.627) 0:02:03.227 *****
2026-02-14 02:58:58.242794 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:58:58.242807 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:58:58.242819 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:58:58.242831 | orchestrator |
2026-02-14 02:58:58.242842 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-02-14 02:58:58.242854 | orchestrator | Saturday 14 February 2026 02:58:46 +0000 (0:00:00.791) 0:02:04.019 *****
2026-02-14 02:58:58.242867 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:58:58.242879 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:58:58.242891 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:58:58.242903 | orchestrator |
2026-02-14 02:58:58.242915 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-02-14 02:58:58.242928 | orchestrator | Saturday 14 February 2026 02:58:47 +0000 (0:00:00.304) 0:02:04.323 *****
2026-02-14 02:58:58.242940 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:58:58.242952 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:58:58.242964 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:58:58.242976 | orchestrator |
2026-02-14 02:58:58.242988 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-02-14 02:58:58.243000 | orchestrator | Saturday 14 February 2026 02:58:47 +0000 (0:00:00.620) 0:02:04.944 *****
2026-02-14 02:58:58.243012 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:58:58.243023 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:58:58.243034 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:58:58.243044 | orchestrator |
2026-02-14 02:58:58.243055 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-02-14 02:58:58.243066 | orchestrator | Saturday 14 February 2026 02:58:48 +0000 (0:00:00.633) 0:02:05.578 *****
2026-02-14 02:58:58.243077 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:58:58.243087 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:58:58.243098 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:58:58.243109 | orchestrator |
2026-02-14 02:58:58.243120 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-02-14 02:58:58.243131 | orchestrator | Saturday 14 February 2026 02:58:49 +0000 (0:00:00.854) 0:02:06.433 *****
2026-02-14 02:58:58.243145 | orchestrator | changed: [testbed-node-0]
2026-02-14 02:58:58.243156 | orchestrator | changed: [testbed-node-2]
2026-02-14 02:58:58.243166 | orchestrator | changed: [testbed-node-1]
2026-02-14 02:58:58.243191 | orchestrator |
2026-02-14 02:58:58.243202 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-02-14 02:58:58.243213 | orchestrator | Saturday 14 February 2026 02:58:50 +0000 (0:00:01.018) 0:02:07.451 *****
2026-02-14 02:58:58.243223 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:58:58.243234 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:58:58.243245 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:58:58.243255 | orchestrator |
2026-02-14 02:58:58.243266 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-02-14 02:58:58.243276 | orchestrator | Saturday 14 February 2026 02:58:50 +0000 (0:00:00.279) 0:02:07.731 *****
2026-02-14 02:58:58.243287 | orchestrator | skipping: [testbed-node-0]
2026-02-14 02:58:58.243297 | orchestrator | skipping: [testbed-node-1]
2026-02-14 02:58:58.243308 | orchestrator | skipping: [testbed-node-2]
2026-02-14 02:58:58.243318 | orchestrator |
2026-02-14 02:58:58.243329 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-02-14 02:58:58.243340 | orchestrator | Saturday 14 February 2026 02:58:50 +0000 (0:00:00.300) 0:02:08.031 *****
2026-02-14 02:58:58.243350 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:58:58.243361 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:58:58.243372 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:58:58.243382 | orchestrator |
2026-02-14 02:58:58.243393 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-02-14 02:58:58.243403 | orchestrator | Saturday 14 February 2026 02:58:51 +0000 (0:00:00.585) 0:02:08.617 *****
2026-02-14 02:58:58.243423 | orchestrator | ok: [testbed-node-0]
2026-02-14 02:58:58.243434 | orchestrator | ok: [testbed-node-1]
2026-02-14 02:58:58.243462 | orchestrator | ok: [testbed-node-2]
2026-02-14 02:58:58.243474 | orchestrator |
2026-02-14 02:58:58.243485 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-02-14 02:58:58.243498 | orchestrator | Saturday 14 February 2026 02:58:52 +0000 (0:00:00.817) 0:02:09.435 *****
2026-02-14 02:58:58.243509 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-14 02:58:58.243520 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-14 02:58:58.243530 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-14 02:58:58.243541 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-14 02:58:58.243552 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-14 02:58:58.243563 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-14 02:58:58.243574 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-14 02:58:58.243610 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-14 02:58:58.243627 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-14 02:58:58.243645 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-02-14 02:58:58.243662 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-14 02:58:58.243675 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-14 02:58:58.243686 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-02-14 02:58:58.243697 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-14 02:58:58.243708 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-14 02:58:58.243719 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-14 02:58:58.243729 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-14 02:58:58.243740 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-14 02:58:58.243751 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-14 02:58:58.243762 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-14 02:58:58.243772 | orchestrator |
2026-02-14 02:58:58.243783 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-02-14 02:58:58.243794 | orchestrator |
2026-02-14 02:58:58.243805 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-02-14 02:58:58.243816 | orchestrator | Saturday 14 February 2026 02:58:55 +0000 (0:00:03.101) 0:02:12.537 *****
2026-02-14 02:58:58.243826 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:58:58.243837 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:58:58.243848 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:58:58.243859 | orchestrator |
2026-02-14 02:58:58.243886 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-02-14 02:58:58.243898 | orchestrator | Saturday 14 February 2026 02:58:55 +0000 (0:00:00.307) 0:02:12.844 *****
2026-02-14 02:58:58.243908 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:58:58.243919 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:58:58.243930 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:58:58.243948 | orchestrator |
2026-02-14 02:58:58.243959 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-02-14 02:58:58.243970 | orchestrator | Saturday 14 February 2026 02:58:56 +0000 (0:00:00.803) 0:02:13.648 *****
2026-02-14 02:58:58.243981 | orchestrator | ok: [testbed-node-3]
2026-02-14 02:58:58.243992 | orchestrator | ok: [testbed-node-4]
2026-02-14 02:58:58.244002 | orchestrator | ok: [testbed-node-5]
2026-02-14 02:58:58.244013 | orchestrator |
2026-02-14 02:58:58.244024 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-02-14 02:58:58.244035 | orchestrator | Saturday 14 February 2026 02:58:56 +0000 (0:00:00.485) 0:02:13.951 *****
2026-02-14 02:58:58.244046 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-14 02:58:58.244057 | orchestrator |
2026-02-14 02:58:58.244068 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-02-14 02:58:58.244079 | orchestrator | Saturday 14 February 2026 02:58:57 +0000 (0:00:00.484) 0:02:14.436 *****
2026-02-14 02:58:58.244090 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:58:58.244101 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:58:58.244111 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:58:58.244122 | orchestrator |
2026-02-14 02:58:58.244133 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-02-14 02:58:58.244144 | orchestrator | Saturday 14 February 2026 02:58:57 +0000 (0:00:00.484) 0:02:14.920 *****
2026-02-14 02:58:58.244154 | orchestrator | skipping: [testbed-node-3]
2026-02-14 02:58:58.244165 | orchestrator | skipping: [testbed-node-4]
2026-02-14 02:58:58.244176 | orchestrator | skipping: [testbed-node-5]
2026-02-14 02:58:58.244187 | orchestrator |
2026-02-14 02:58:58.244198 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-02-14 02:58:58.244209 | orchestrator | Saturday 14 February 2026 02:58:58 +0000 (0:00:00.301) 0:02:15.222 *****
2026-02-14 02:58:58.244227 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:00:35.035484 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:00:35.035581 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:00:35.035593 | orchestrator |
2026-02-14 03:00:35.035602 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-02-14 03:00:35.035610 | orchestrator | Saturday 14 February 2026 02:58:58 +0000 (0:00:00.290) 0:02:15.512 *****
2026-02-14 03:00:35.035618 | orchestrator | changed: [testbed-node-3]
2026-02-14 03:00:35.035625 | orchestrator | changed: [testbed-node-4]
2026-02-14 03:00:35.035632 | orchestrator | changed: [testbed-node-5]
2026-02-14 03:00:35.035639 | orchestrator |
2026-02-14 03:00:35.035645 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-02-14 03:00:35.035651 | orchestrator | Saturday 14 February 2026 02:58:58 +0000 (0:00:00.623) 0:02:16.135 *****
2026-02-14 03:00:35.035658 | orchestrator | changed: [testbed-node-3]
2026-02-14 03:00:35.035665 | orchestrator | changed: [testbed-node-4]
2026-02-14 03:00:35.035672 | orchestrator | changed: [testbed-node-5]
2026-02-14 03:00:35.035679 | orchestrator |
2026-02-14 03:00:35.035686 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-02-14 03:00:35.035693 | orchestrator | Saturday 14 February 2026 02:59:00 +0000 (0:00:01.356) 0:02:17.491 *****
2026-02-14 03:00:35.035700 | orchestrator | changed: [testbed-node-3]
2026-02-14 03:00:35.035707 | orchestrator | changed: [testbed-node-4]
2026-02-14 03:00:35.035714 | orchestrator | changed: [testbed-node-5]
2026-02-14 03:00:35.035721 | orchestrator |
2026-02-14 03:00:35.035728 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-02-14 03:00:35.035735 | orchestrator | Saturday 14 February 2026 02:59:01 +0000 (0:00:01.183) 0:02:18.675 *****
2026-02-14 03:00:35.035740 | orchestrator | changed: [testbed-node-3]
2026-02-14 03:00:35.035747 | orchestrator | changed: [testbed-node-5]
2026-02-14 03:00:35.035753 | orchestrator | changed: [testbed-node-4]
2026-02-14 03:00:35.035760 | orchestrator |
2026-02-14 03:00:35.035766 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-02-14 03:00:35.035824 | orchestrator |
2026-02-14 03:00:35.035832 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-02-14 03:00:35.035839 | orchestrator | Saturday 14 February 2026 02:59:11 +0000 (0:00:09.784) 0:02:28.460 *****
2026-02-14 03:00:35.035845 | orchestrator | ok: [testbed-manager]
2026-02-14 03:00:35.035853 | orchestrator |
2026-02-14 03:00:35.035860 | orchestrator | TASK [Create .kube directory] **************************************************
2026-02-14 03:00:35.035867 | orchestrator | Saturday 14 February 2026 02:59:12 +0000 (0:00:00.975) 0:02:29.435 *****
2026-02-14 03:00:35.035874 | orchestrator | changed: [testbed-manager]
2026-02-14 03:00:35.035880 | orchestrator |
2026-02-14 03:00:35.035887 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-02-14 03:00:35.035894 | orchestrator | Saturday 14 February 2026 02:59:12 +0000 (0:00:00.458) 0:02:29.894 *****
2026-02-14 03:00:35.035900 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-14 03:00:35.035907 | orchestrator |
2026-02-14 03:00:35.035914 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-02-14 03:00:35.035921 | orchestrator | Saturday 14 February 2026 02:59:13 +0000 (0:00:00.554) 0:02:30.448 *****
2026-02-14 03:00:35.035927 | orchestrator | changed: [testbed-manager]
2026-02-14 03:00:35.035933 | orchestrator |
2026-02-14 03:00:35.035940 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-02-14 03:00:35.035947 | orchestrator | Saturday 14 February 2026 02:59:14 +0000 (0:00:00.872) 0:02:31.321 *****
2026-02-14 03:00:35.035954 | orchestrator | changed: [testbed-manager]
2026-02-14 03:00:35.035960 | orchestrator |
2026-02-14 03:00:35.035967 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-02-14 03:00:35.035974 | orchestrator | Saturday 14 February 2026 02:59:14 +0000 (0:00:00.587) 0:02:31.908 *****
2026-02-14 03:00:35.035980 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-14 03:00:35.035987 | orchestrator |
2026-02-14 03:00:35.035994 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-02-14 03:00:35.036000 | orchestrator | Saturday 14 February 2026 02:59:16 +0000 (0:00:01.502) 0:02:33.410 *****
2026-02-14 03:00:35.036007 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-14 03:00:35.036014 | orchestrator |
2026-02-14 03:00:35.036037 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-02-14 03:00:35.036045 | orchestrator | Saturday 14 February 2026 02:59:17 +0000 (0:00:00.831) 0:02:34.242 *****
2026-02-14 03:00:35.036051 | orchestrator | changed: [testbed-manager]
2026-02-14 03:00:35.036058 | orchestrator |
2026-02-14 03:00:35.036064 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-02-14 03:00:35.036071 | orchestrator | Saturday 14 February 2026 02:59:17 +0000 (0:00:00.429) 0:02:34.672 *****
2026-02-14 03:00:35.036078 | orchestrator | changed: [testbed-manager]
2026-02-14 03:00:35.036085 | orchestrator |
2026-02-14 03:00:35.036092 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-02-14 03:00:35.036099 | orchestrator |
2026-02-14 03:00:35.036105 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-02-14 03:00:35.036114 | orchestrator | Saturday 14 February 2026 02:59:17 +0000 (0:00:00.454) 0:02:35.126 *****
2026-02-14 03:00:35.036121 | orchestrator | ok: [testbed-manager]
2026-02-14 03:00:35.036128 | orchestrator |
2026-02-14 03:00:35.036135 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-02-14 03:00:35.036141 | orchestrator | Saturday 14 February 2026 02:59:18 +0000 (0:00:00.360) 0:02:35.486 *****
2026-02-14 03:00:35.036148 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-02-14 03:00:35.036157 | orchestrator |
2026-02-14 03:00:35.036164 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-02-14 03:00:35.036171 | orchestrator | Saturday 14 February 2026 02:59:18 +0000 (0:00:00.229) 0:02:35.715 *****
2026-02-14 03:00:35.036178 | orchestrator | ok: [testbed-manager]
2026-02-14 03:00:35.036186 | orchestrator |
2026-02-14 03:00:35.036201 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-02-14 03:00:35.036209 | orchestrator | Saturday 14 February 2026 02:59:19 +0000 (0:00:00.768) 0:02:36.483 *****
2026-02-14 03:00:35.036216 | orchestrator | ok: [testbed-manager]
2026-02-14 03:00:35.036223 | orchestrator |
2026-02-14 03:00:35.036246 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-02-14 03:00:35.036255 | orchestrator | Saturday 14 February 2026 02:59:20 +0000 (0:00:01.560) 0:02:38.044 *****
2026-02-14 03:00:35.036262 | orchestrator | changed: [testbed-manager]
2026-02-14 03:00:35.036269 | orchestrator |
2026-02-14 03:00:35.036276 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-02-14 03:00:35.036283 | orchestrator | Saturday 14 February 2026 02:59:21 +0000 (0:00:00.771) 0:02:38.816 *****
2026-02-14 03:00:35.036291 | orchestrator | ok: [testbed-manager]
2026-02-14 03:00:35.036299 | orchestrator |
2026-02-14 03:00:35.036306 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-02-14 03:00:35.036313 | orchestrator | Saturday 14 February 2026 02:59:22 +0000 (0:00:00.475) 0:02:39.292 *****
2026-02-14 03:00:35.036321 | orchestrator | changed: [testbed-manager]
2026-02-14 03:00:35.036328 | orchestrator |
2026-02-14 03:00:35.036336 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-02-14 03:00:35.036343 | orchestrator | Saturday 14 February 2026 02:59:29 +0000 (0:00:07.207) 0:02:46.500 *****
2026-02-14 03:00:35.036351 | orchestrator | changed: [testbed-manager]
2026-02-14 03:00:35.036359 | orchestrator |
2026-02-14 03:00:35.036367 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-02-14 03:00:35.036374 | orchestrator | Saturday 14 February 2026 02:59:41 +0000 (0:00:12.206) 0:02:58.707 *****
2026-02-14 03:00:35.036382 | orchestrator | ok: [testbed-manager]
2026-02-14 03:00:35.036389 | orchestrator |
2026-02-14 03:00:35.036396 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-02-14 03:00:35.036404 | orchestrator |
2026-02-14 03:00:35.036411 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-02-14 03:00:35.036418 | orchestrator | Saturday 14 February 2026 02:59:42 +0000 (0:00:00.728) 0:02:59.435 *****
2026-02-14 03:00:35.036425 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:00:35.036431 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:00:35.036437 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:00:35.036445 | orchestrator |
2026-02-14 03:00:35.036451 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-02-14 03:00:35.036458 | orchestrator | Saturday 14 February 2026 02:59:42 +0000 (0:00:00.331) 0:02:59.767 *****
2026-02-14 03:00:35.036465 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:00:35.036472 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:00:35.036479 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:00:35.036486 | orchestrator |
2026-02-14 03:00:35.036493 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-02-14 03:00:35.036501 | orchestrator | Saturday 14 February 2026 02:59:42 +0000 (0:00:00.291) 0:03:00.058 *****
2026-02-14 03:00:35.036508 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 03:00:35.036515 | orchestrator |
2026-02-14 03:00:35.036522 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-02-14 03:00:35.036529 | orchestrator | Saturday 14 February 2026 02:59:43 +0000 (0:00:00.649) 0:03:00.708 *****
2026-02-14 03:00:35.036536 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-14 03:00:35.036543 | orchestrator |
2026-02-14 03:00:35.036550 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-02-14 03:00:35.036557 | orchestrator | Saturday 14 February 2026 02:59:44 +0000 (0:00:00.798) 0:03:01.506 *****
2026-02-14 03:00:35.036565 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-14 03:00:35.036572 | orchestrator |
2026-02-14 03:00:35.036579 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-02-14 03:00:35.036600 | orchestrator | Saturday 14 February 2026 02:59:45 +0000 (0:00:00.865) 0:03:02.372 *****
2026-02-14 03:00:35.036607 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:00:35.036614 | orchestrator |
2026-02-14 03:00:35.036621 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-02-14 03:00:35.036628 | orchestrator | Saturday 14 February 2026 02:59:45 +0000 (0:00:00.109) 0:03:02.481 *****
2026-02-14 03:00:35.036634 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-14 03:00:35.036641 | orchestrator |
2026-02-14 03:00:35.036648 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-02-14 03:00:35.036656 | orchestrator | Saturday 14 February 2026 02:59:46 +0000 (0:00:00.954) 0:03:03.436 *****
2026-02-14 03:00:35.036662 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:00:35.036669 | orchestrator |
2026-02-14 03:00:35.036676 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-02-14 03:00:35.036683 | orchestrator | Saturday 14 February 2026 02:59:46 +0000 (0:00:00.119) 0:03:03.555 *****
2026-02-14 03:00:35.036689 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:00:35.036696 | orchestrator |
2026-02-14 03:00:35.036703 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-02-14 03:00:35.036710 | orchestrator | Saturday 14
February 2026 02:59:46 +0000 (0:00:00.107) 0:03:03.663 ***** 2026-02-14 03:00:35.036717 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:00:35.036723 | orchestrator | 2026-02-14 03:00:35.036730 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-02-14 03:00:35.036742 | orchestrator | Saturday 14 February 2026 02:59:46 +0000 (0:00:00.118) 0:03:03.781 ***** 2026-02-14 03:00:35.036748 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:00:35.036754 | orchestrator | 2026-02-14 03:00:35.036760 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-02-14 03:00:35.036766 | orchestrator | Saturday 14 February 2026 02:59:46 +0000 (0:00:00.120) 0:03:03.902 ***** 2026-02-14 03:00:35.036794 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-14 03:00:35.036800 | orchestrator | 2026-02-14 03:00:35.036806 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-02-14 03:00:35.036812 | orchestrator | Saturday 14 February 2026 02:59:52 +0000 (0:00:06.191) 0:03:10.094 ***** 2026-02-14 03:00:35.036818 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-02-14 03:00:35.036825 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
2026-02-14 03:00:35.036842 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-02-14 03:00:57.605527 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-02-14 03:00:57.605671 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-02-14 03:00:57.605700 | orchestrator |
2026-02-14 03:00:57.605720 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-02-14 03:00:57.605739 | orchestrator | Saturday 14 February 2026 03:00:35 +0000 (0:00:42.081) 0:03:52.176 *****
2026-02-14 03:00:57.605756 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-14 03:00:57.605773 | orchestrator |
2026-02-14 03:00:57.605791 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-02-14 03:00:57.605842 | orchestrator | Saturday 14 February 2026 03:00:36 +0000 (0:00:01.217) 0:03:53.393 *****
2026-02-14 03:00:57.605863 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-14 03:00:57.605881 | orchestrator |
2026-02-14 03:00:57.605900 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-02-14 03:00:57.605917 | orchestrator | Saturday 14 February 2026 03:00:37 +0000 (0:00:01.734) 0:03:55.127 *****
2026-02-14 03:00:57.605935 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-14 03:00:57.605952 | orchestrator |
2026-02-14 03:00:57.605970 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-02-14 03:00:57.605989 | orchestrator | Saturday 14 February 2026 03:00:39 +0000 (0:00:01.050) 0:03:56.178 *****
2026-02-14 03:00:57.606162 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:00:57.606188 | orchestrator |
2026-02-14 03:00:57.606208 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-02-14 03:00:57.606226 | orchestrator | Saturday 14 February 2026 03:00:39 +0000 (0:00:00.133) 0:03:56.312 *****
2026-02-14 03:00:57.606246 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-02-14 03:00:57.606266 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-02-14 03:00:57.606283 | orchestrator |
2026-02-14 03:00:57.606302 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-02-14 03:00:57.606320 | orchestrator | Saturday 14 February 2026 03:00:40 +0000 (0:00:01.836) 0:03:58.148 *****
2026-02-14 03:00:57.606339 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:00:57.606358 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:00:57.606377 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:00:57.606395 | orchestrator |
2026-02-14 03:00:57.606413 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-02-14 03:00:57.606432 | orchestrator | Saturday 14 February 2026 03:00:41 +0000 (0:00:00.309) 0:03:58.458 *****
2026-02-14 03:00:57.606451 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:00:57.606469 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:00:57.606488 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:00:57.606507 | orchestrator |
2026-02-14 03:00:57.606528 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-02-14 03:00:57.606548 | orchestrator |
2026-02-14 03:00:57.606568 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-02-14 03:00:57.606587 | orchestrator | Saturday 14 February 2026 03:00:42 +0000 (0:00:00.835) 0:03:59.293 *****
2026-02-14 03:00:57.606606 | orchestrator | ok: [testbed-manager]
2026-02-14 03:00:57.606623 | orchestrator |
2026-02-14 03:00:57.606642 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-02-14 03:00:57.606661 | orchestrator | Saturday 14 February 2026 03:00:42 +0000 (0:00:00.347) 0:03:59.641 *****
2026-02-14 03:00:57.606678 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-02-14 03:00:57.606696 | orchestrator |
2026-02-14 03:00:57.606715 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-02-14 03:00:57.606734 | orchestrator | Saturday 14 February 2026 03:00:42 +0000 (0:00:00.234) 0:03:59.875 *****
2026-02-14 03:00:57.606751 | orchestrator | changed: [testbed-manager]
2026-02-14 03:00:57.606768 | orchestrator |
2026-02-14 03:00:57.606780 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-02-14 03:00:57.606791 | orchestrator |
2026-02-14 03:00:57.606802 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-02-14 03:00:57.606878 | orchestrator | Saturday 14 February 2026 03:00:48 +0000 (0:00:05.345) 0:04:05.220 *****
2026-02-14 03:00:57.606892 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:00:57.606904 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:00:57.606915 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:00:57.606925 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:00:57.606936 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:00:57.606947 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:00:57.606957 | orchestrator |
2026-02-14 03:00:57.606968 | orchestrator | TASK [Manage labels] ***********************************************************
2026-02-14 03:00:57.606979 | orchestrator | Saturday 14 February 2026 03:00:48 +0000 (0:00:00.781) 0:04:06.002 *****
2026-02-14 03:00:57.606990 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-14 03:00:57.607003 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-14 03:00:57.607022 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-14 03:00:57.607042 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-14 03:00:57.607092 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-14 03:00:57.607114 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-14 03:00:57.607132 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-14 03:00:57.607149 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-14 03:00:57.607166 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-14 03:00:57.607211 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-02-14 03:00:57.607232 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-02-14 03:00:57.607253 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-02-14 03:00:57.607272 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-02-14 03:00:57.607286 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-02-14 03:00:57.607298 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-02-14 03:00:57.607329 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-02-14 03:00:57.607340 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-02-14 03:00:57.607351 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-02-14 03:00:57.607362 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-02-14 03:00:57.607373 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-02-14 03:00:57.607384 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-02-14 03:00:57.607395 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-02-14 03:00:57.607406 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-02-14 03:00:57.607417 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-02-14 03:00:57.607427 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-02-14 03:00:57.607438 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-02-14 03:00:57.607449 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-02-14 03:00:57.607460 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-02-14 03:00:57.607471 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-02-14 03:00:57.607481 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-02-14 03:00:57.607492 | orchestrator |
2026-02-14 03:00:57.607503 | orchestrator | TASK [Manage annotations] ******************************************************
2026-02-14 03:00:57.607514 | orchestrator | Saturday 14 February 2026 03:00:56 +0000 (0:00:07.580) 0:04:13.582 *****
2026-02-14 03:00:57.607525 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:00:57.607536 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:00:57.607547 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:00:57.607557 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:00:57.607568 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:00:57.607579 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:00:57.607590 | orchestrator |
2026-02-14 03:00:57.607600 | orchestrator | TASK [Manage taints] ***********************************************************
2026-02-14 03:00:57.607611 | orchestrator | Saturday 14 February 2026 03:00:56 +0000 (0:00:00.510) 0:04:14.092 *****
2026-02-14 03:00:57.607622 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:00:57.607642 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:00:57.607658 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:00:57.607682 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:00:57.607708 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:00:57.607727 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:00:57.607744 | orchestrator |
2026-02-14 03:00:57.607762 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 03:00:57.607780 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-14 03:00:57.607802 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-02-14 03:00:57.607895 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-14 03:00:57.607908 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-14 03:00:57.607919 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-14 03:00:57.607930 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-14 03:00:57.607941 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-14 03:00:57.607952 | orchestrator |
2026-02-14 03:00:57.607963 | orchestrator |
2026-02-14 03:00:57.607974 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 03:00:57.607985 | orchestrator | Saturday 14 February 2026 03:00:57 +0000 (0:00:00.645) 0:04:14.738 *****
2026-02-14 03:00:57.608009 | orchestrator | ===============================================================================
2026-02-14 03:00:57.970330 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 53.81s
2026-02-14 03:00:57.970430 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.08s
2026-02-14 03:00:57.970446 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 26.50s
2026-02-14 03:00:57.970459 | orchestrator | kubectl : Install required packages ------------------------------------ 12.21s
2026-02-14 03:00:57.970470 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 9.78s
2026-02-14 03:00:57.970482 | orchestrator | Manage labels ----------------------------------------------------------- 7.58s
2026-02-14 03:00:57.970493 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.21s
2026-02-14 03:00:57.970504 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 6.19s
2026-02-14 03:00:57.970516 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.35s
2026-02-14 03:00:57.970527 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 4.76s
2026-02-14 03:00:57.970538 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 3.31s
2026-02-14 03:00:57.970550 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.10s
2026-02-14 03:00:57.970563 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.61s
2026-02-14 03:00:57.970574 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.43s
2026-02-14 03:00:57.970585 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.84s
2026-02-14 03:00:57.970596 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.73s
2026-02-14 03:00:57.970607 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.60s
2026-02-14 03:00:57.970649 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.56s
2026-02-14 03:00:57.970661 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.50s
2026-02-14 03:00:57.970672 | orchestrator | k3s_agent : Create custom resolv.conf for k3s --------------------------- 1.36s
2026-02-14 03:00:58.262788 | orchestrator | + osism apply copy-kubeconfig
2026-02-14 03:01:10.309568 | orchestrator | 2026-02-14 03:01:10 | INFO  | Task e9548669-4bc1-4c97-8102-c7bf127c4f13 (copy-kubeconfig) was prepared for execution.
2026-02-14 03:01:10.309680 | orchestrator | 2026-02-14 03:01:10 | INFO  | It takes a moment until task e9548669-4bc1-4c97-8102-c7bf127c4f13 (copy-kubeconfig) has been started and output is visible here.
2026-02-14 03:01:17.245218 | orchestrator |
2026-02-14 03:01:17.245332 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-02-14 03:01:17.245348 | orchestrator |
2026-02-14 03:01:17.245361 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-02-14 03:01:17.245372 | orchestrator | Saturday 14 February 2026 03:01:14 +0000 (0:00:00.153) 0:00:00.153 *****
2026-02-14 03:01:17.245384 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-14 03:01:17.245395 | orchestrator |
2026-02-14 03:01:17.245407 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-02-14 03:01:17.245418 | orchestrator | Saturday 14 February 2026 03:01:15 +0000 (0:00:00.768) 0:00:00.921 *****
2026-02-14 03:01:17.245449 | orchestrator | changed: [testbed-manager]
2026-02-14 03:01:17.245463 | orchestrator |
2026-02-14 03:01:17.245474 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-02-14 03:01:17.245485 | orchestrator | Saturday 14 February 2026 03:01:16 +0000 (0:00:01.259) 0:00:02.180 *****
2026-02-14 03:01:17.245502 | orchestrator | changed: [testbed-manager]
2026-02-14 03:01:17.245521 | orchestrator |
2026-02-14 03:01:17.245544 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 03:01:17.245563 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-14 03:01:17.245582 | orchestrator |
2026-02-14 03:01:17.245600 | orchestrator |
2026-02-14 03:01:17.245618 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 03:01:17.245636 | orchestrator | Saturday 14 February 2026 03:01:16 +0000 (0:00:00.463) 0:00:02.643 *****
2026-02-14 03:01:17.245654 | orchestrator | ===============================================================================
2026-02-14 03:01:17.245673 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.26s
2026-02-14 03:01:17.245692 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.77s
2026-02-14 03:01:17.245712 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.46s
2026-02-14 03:01:17.567990 | orchestrator | + sh -c /opt/configuration/scripts/deploy/200-infrastructure.sh
2026-02-14 03:01:29.757440 | orchestrator | 2026-02-14 03:01:29 | INFO  | Task 1debdeee-96e5-4906-a38b-a3e6fa641080 (openstackclient) was prepared for execution.
2026-02-14 03:01:29.757587 | orchestrator | 2026-02-14 03:01:29 | INFO  | It takes a moment until task 1debdeee-96e5-4906-a38b-a3e6fa641080 (openstackclient) has been started and output is visible here.
2026-02-14 03:02:16.629187 | orchestrator |
2026-02-14 03:02:16.629300 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-02-14 03:02:16.629316 | orchestrator |
2026-02-14 03:02:16.629328 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-02-14 03:02:16.629339 | orchestrator | Saturday 14 February 2026 03:01:33 +0000 (0:00:00.225) 0:00:00.225 *****
2026-02-14 03:02:16.629352 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-02-14 03:02:16.629364 | orchestrator |
2026-02-14 03:02:16.629400 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-02-14 03:02:16.629412 | orchestrator | Saturday 14 February 2026 03:01:34 +0000 (0:00:00.230) 0:00:00.456 *****
2026-02-14 03:02:16.629423 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-02-14 03:02:16.629434 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-02-14 03:02:16.629446 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-02-14 03:02:16.629457 | orchestrator |
2026-02-14 03:02:16.629468 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-02-14 03:02:16.629479 | orchestrator | Saturday 14 February 2026 03:01:35 +0000 (0:00:01.283) 0:00:01.739 *****
2026-02-14 03:02:16.629490 | orchestrator | changed: [testbed-manager]
2026-02-14 03:02:16.629501 | orchestrator |
2026-02-14 03:02:16.629512 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-02-14 03:02:16.629523 | orchestrator | Saturday 14 February 2026 03:01:36 +0000 (0:00:01.361) 0:00:03.101 *****
2026-02-14 03:02:16.629534 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-02-14 03:02:16.629546 | orchestrator | ok: [testbed-manager]
2026-02-14 03:02:16.629558 | orchestrator |
2026-02-14 03:02:16.629569 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-02-14 03:02:16.629580 | orchestrator | Saturday 14 February 2026 03:02:11 +0000 (0:00:34.615) 0:00:37.716 *****
2026-02-14 03:02:16.629591 | orchestrator | changed: [testbed-manager]
2026-02-14 03:02:16.629601 | orchestrator |
2026-02-14 03:02:16.629612 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-02-14 03:02:16.629624 | orchestrator | Saturday 14 February 2026 03:02:12 +0000 (0:00:00.945) 0:00:38.661 *****
2026-02-14 03:02:16.629635 | orchestrator | ok: [testbed-manager]
2026-02-14 03:02:16.629645 | orchestrator |
2026-02-14 03:02:16.629656 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-02-14 03:02:16.629667 | orchestrator | Saturday 14 February 2026 03:02:13 +0000 (0:00:01.464) 0:00:39.291 *****
2026-02-14 03:02:16.629678 | orchestrator | changed: [testbed-manager]
2026-02-14 03:02:16.629689 | orchestrator |
2026-02-14 03:02:16.629701 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-02-14 03:02:16.629711 | orchestrator | Saturday 14 February 2026 03:02:14 +0000 (0:00:01.464) 0:00:40.755 *****
2026-02-14 03:02:16.629723 | orchestrator | changed: [testbed-manager]
2026-02-14 03:02:16.629735 | orchestrator |
2026-02-14 03:02:16.629748 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-02-14 03:02:16.629761 | orchestrator | Saturday 14 February 2026 03:02:15 +0000 (0:00:00.699) 0:00:41.455 *****
2026-02-14 03:02:16.629774 | orchestrator | changed: [testbed-manager]
2026-02-14 03:02:16.629786 | orchestrator |
2026-02-14 03:02:16.629799 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-02-14 03:02:16.629812 | orchestrator | Saturday 14 February 2026 03:02:15 +0000 (0:00:00.607) 0:00:42.063 *****
2026-02-14 03:02:16.629824 | orchestrator | ok: [testbed-manager]
2026-02-14 03:02:16.629837 | orchestrator |
2026-02-14 03:02:16.629849 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 03:02:16.629862 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-14 03:02:16.629875 | orchestrator |
2026-02-14 03:02:16.629888 | orchestrator |
2026-02-14 03:02:16.629901 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 03:02:16.629912 | orchestrator | Saturday 14 February 2026 03:02:16 +0000 (0:00:00.418) 0:00:42.481 *****
2026-02-14 03:02:16.629923 | orchestrator | ===============================================================================
2026-02-14 03:02:16.629933 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 34.62s
2026-02-14 03:02:16.629944 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.46s
2026-02-14 03:02:16.629993 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.36s
2026-02-14 03:02:16.630005 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.28s
2026-02-14 03:02:16.630083 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.95s
2026-02-14 03:02:16.630097 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.70s
2026-02-14 03:02:16.630108 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.63s
2026-02-14 03:02:16.630119 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.61s
2026-02-14 03:02:16.630130 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.42s
2026-02-14 03:02:16.630141 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.23s
2026-02-14 03:02:18.979716 | orchestrator | 2026-02-14 03:02:18 | INFO  | Task 9d3b20fc-210b-4753-bf79-1af82772dbd3 (common) was prepared for execution.
2026-02-14 03:02:18.979800 | orchestrator | 2026-02-14 03:02:18 | INFO  | It takes a moment until task 9d3b20fc-210b-4753-bf79-1af82772dbd3 (common) has been started and output is visible here.
2026-02-14 03:02:31.076953 | orchestrator |
2026-02-14 03:02:31.077112 | orchestrator | PLAY [Apply role common] *******************************************************
2026-02-14 03:02:31.077131 | orchestrator |
2026-02-14 03:02:31.077143 | orchestrator | TASK [common : include_tasks] **************************************************
2026-02-14 03:02:31.077154 | orchestrator | Saturday 14 February 2026 03:02:23 +0000 (0:00:00.272) 0:00:00.272 *****
2026-02-14 03:02:31.077166 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-14 03:02:31.077180 | orchestrator |
2026-02-14 03:02:31.077191 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-02-14 03:02:31.077201 | orchestrator | Saturday 14 February 2026 03:02:24 +0000 (0:00:01.297) 0:00:01.569 *****
2026-02-14 03:02:31.077213 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-14 03:02:31.077223 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-14 03:02:31.077235 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-14 03:02:31.077246 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-14 03:02:31.077257 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-14 03:02:31.077275 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-14 03:02:31.077294 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-14 03:02:31.077311 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-14 03:02:31.077329 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-14 03:02:31.077370 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-14 03:02:31.077389 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-14 03:02:31.077406 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-14 03:02:31.077426 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-14 03:02:31.077445 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-14 03:02:31.077464 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-14 03:02:31.077483 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-14 03:02:31.077503 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-14 03:02:31.077552 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-14 03:02:31.077575 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-14 03:02:31.077595 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-14 03:02:31.077615 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-14 03:02:31.077635 | orchestrator |
2026-02-14 03:02:31.077656 | orchestrator | TASK [common : include_tasks] **************************************************
2026-02-14 03:02:31.077677 | orchestrator | Saturday 14 February 2026 03:02:27 +0000 (0:00:02.736) 0:00:04.305 *****
2026-02-14 03:02:31.077697 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-14 03:02:31.077718 | orchestrator |
2026-02-14 03:02:31.077732 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-02-14 03:02:31.077752 | orchestrator | Saturday 14 February 2026 03:02:28 +0000 (0:00:01.334) 0:00:05.640 *****
2026-02-14 03:02:31.077768 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-14 03:02:31.077785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-14 03:02:31.077828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 03:02:31.077844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 03:02:31.077859 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 03:02:31.077871 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 03:02:31.077892 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 03:02:31.077904 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:31.077916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:31.077945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:32.211865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:32.211975 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:32.212095 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:32.212118 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:32.212138 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:32.212176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 
03:02:32.212196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:32.212249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:32.212272 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:32.212292 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:32.212324 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:32.212344 | orchestrator | 2026-02-14 03:02:32.212367 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-02-14 03:02:32.212388 | orchestrator | Saturday 14 February 2026 03:02:31 +0000 (0:00:03.491) 0:00:09.131 ***** 2026-02-14 03:02:32.212411 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 03:02:32.212431 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:02:32.212449 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:02:32.212467 | orchestrator | skipping: [testbed-manager] 2026-02-14 03:02:32.212487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 03:02:32.212530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:02:32.823706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:02:32.823833 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:02:32.823896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 03:02:32.823913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:02:32.823925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:02:32.823937 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:02:32.823949 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 03:02:32.823965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:02:32.823977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:02:32.824065 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:02:32.824096 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 03:02:32.824117 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:02:32.824129 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:02:32.824141 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:02:32.824152 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 03:02:32.824164 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:02:32.824175 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:02:32.824186 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:02:32.824199 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 03:02:32.824217 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:02:33.704067 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:02:33.704159 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:02:33.704173 | orchestrator | 2026-02-14 03:02:33.704183 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-02-14 03:02:33.704194 | orchestrator | Saturday 14 February 2026 03:02:32 +0000 (0:00:00.913) 0:00:10.045 ***** 2026-02-14 03:02:33.704205 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 03:02:33.704217 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:02:33.704227 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:02:33.704236 | orchestrator | skipping: [testbed-manager] 2026-02-14 03:02:33.704263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 03:02:33.704277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:02:33.704304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:02:33.704313 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:02:33.704345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 03:02:33.704356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:02:33.704365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:02:33.704375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 03:02:33.704385 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:02:33.704394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-02-14 03:02:33.704407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:02:33.704422 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:02:33.704432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 03:02:33.704456 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:02:38.556703 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:02:38.556807 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:02:38.556828 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 03:02:38.556843 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:02:38.556856 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:02:38.556867 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:02:38.556879 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 03:02:38.556916 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:02:38.556928 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:02:38.556940 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:02:38.556951 | orchestrator | 2026-02-14 
03:02:38.556963 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-02-14 03:02:38.556976 | orchestrator | Saturday 14 February 2026 03:02:34 +0000 (0:00:01.754) 0:00:11.800 ***** 2026-02-14 03:02:38.556987 | orchestrator | skipping: [testbed-manager] 2026-02-14 03:02:38.557032 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:02:38.557044 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:02:38.557055 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:02:38.557082 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:02:38.557093 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:02:38.557104 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:02:38.557115 | orchestrator | 2026-02-14 03:02:38.557127 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-02-14 03:02:38.557138 | orchestrator | Saturday 14 February 2026 03:02:35 +0000 (0:00:00.705) 0:00:12.505 ***** 2026-02-14 03:02:38.557149 | orchestrator | skipping: [testbed-manager] 2026-02-14 03:02:38.557159 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:02:38.557171 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:02:38.557181 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:02:38.557193 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:02:38.557204 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:02:38.557215 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:02:38.557227 | orchestrator | 2026-02-14 03:02:38.557240 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-02-14 03:02:38.557253 | orchestrator | Saturday 14 February 2026 03:02:36 +0000 (0:00:00.827) 0:00:13.333 ***** 2026-02-14 03:02:38.557267 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 03:02:38.557297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 03:02:38.557340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 03:02:38.557358 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 03:02:38.557372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 03:02:38.557386 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 03:02:38.557414 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 03:02:41.294666 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:41.294789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:41.294832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:41.294868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:41.294881 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:41.294892 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:41.294931 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:41.294945 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:41.294959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:41.294977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:41.294988 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:41.295088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:41.295103 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:41.295114 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:41.295126 | orchestrator | 2026-02-14 03:02:41.295139 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-02-14 03:02:41.295151 | orchestrator | Saturday 14 February 2026 03:02:39 +0000 
(0:00:03.302) 0:00:16.635 ***** 2026-02-14 03:02:41.295162 | orchestrator | [WARNING]: Skipped 2026-02-14 03:02:41.295174 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-02-14 03:02:41.295187 | orchestrator | to this access issue: 2026-02-14 03:02:41.295201 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-02-14 03:02:41.295214 | orchestrator | directory 2026-02-14 03:02:41.295226 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-14 03:02:41.295240 | orchestrator | 2026-02-14 03:02:41.295252 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-02-14 03:02:41.295264 | orchestrator | Saturday 14 February 2026 03:02:40 +0000 (0:00:00.952) 0:00:17.587 ***** 2026-02-14 03:02:41.295276 | orchestrator | [WARNING]: Skipped 2026-02-14 03:02:41.295296 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-02-14 03:02:50.887302 | orchestrator | to this access issue: 2026-02-14 03:02:50.887421 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-02-14 03:02:50.887438 | orchestrator | directory 2026-02-14 03:02:50.887452 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-14 03:02:50.887465 | orchestrator | 2026-02-14 03:02:50.887477 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-02-14 03:02:50.887490 | orchestrator | Saturday 14 February 2026 03:02:41 +0000 (0:00:01.204) 0:00:18.792 ***** 2026-02-14 03:02:50.887524 | orchestrator | [WARNING]: Skipped 2026-02-14 03:02:50.887537 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-02-14 03:02:50.887548 | orchestrator | to this access issue: 2026-02-14 03:02:50.887559 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 
2026-02-14 03:02:50.887570 | orchestrator | directory 2026-02-14 03:02:50.887582 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-14 03:02:50.887593 | orchestrator | 2026-02-14 03:02:50.887604 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-02-14 03:02:50.887615 | orchestrator | Saturday 14 February 2026 03:02:42 +0000 (0:00:00.873) 0:00:19.665 ***** 2026-02-14 03:02:50.887626 | orchestrator | [WARNING]: Skipped 2026-02-14 03:02:50.887637 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-02-14 03:02:50.887648 | orchestrator | to this access issue: 2026-02-14 03:02:50.887659 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-02-14 03:02:50.887670 | orchestrator | directory 2026-02-14 03:02:50.887681 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-14 03:02:50.887692 | orchestrator | 2026-02-14 03:02:50.887703 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-02-14 03:02:50.887714 | orchestrator | Saturday 14 February 2026 03:02:43 +0000 (0:00:00.822) 0:00:20.488 ***** 2026-02-14 03:02:50.887725 | orchestrator | changed: [testbed-manager] 2026-02-14 03:02:50.887737 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:02:50.887748 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:02:50.887759 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:02:50.887770 | orchestrator | changed: [testbed-node-3] 2026-02-14 03:02:50.887781 | orchestrator | changed: [testbed-node-4] 2026-02-14 03:02:50.887810 | orchestrator | changed: [testbed-node-5] 2026-02-14 03:02:50.887822 | orchestrator | 2026-02-14 03:02:50.887836 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-02-14 03:02:50.887849 | orchestrator | Saturday 14 February 2026 03:02:45 +0000 (0:00:02.507) 0:00:22.995 ***** 
2026-02-14 03:02:50.887862 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-14 03:02:50.887876 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-14 03:02:50.887889 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-14 03:02:50.887902 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-14 03:02:50.887914 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-14 03:02:50.887926 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-14 03:02:50.887945 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-14 03:02:50.887959 | orchestrator | 2026-02-14 03:02:50.887987 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-02-14 03:02:50.888000 | orchestrator | Saturday 14 February 2026 03:02:47 +0000 (0:00:01.935) 0:00:24.931 ***** 2026-02-14 03:02:50.888014 | orchestrator | changed: [testbed-manager] 2026-02-14 03:02:50.888053 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:02:50.888066 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:02:50.888079 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:02:50.888091 | orchestrator | changed: [testbed-node-3] 2026-02-14 03:02:50.888104 | orchestrator | changed: [testbed-node-4] 2026-02-14 03:02:50.888114 | orchestrator | changed: [testbed-node-5] 2026-02-14 03:02:50.888125 | orchestrator | 2026-02-14 03:02:50.888136 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-02-14 03:02:50.888155 | orchestrator | Saturday 14 
February 2026 03:02:49 +0000 (0:00:01.890) 0:00:26.822 ***** 2026-02-14 03:02:50.888169 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 03:02:50.888203 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:02:50.888216 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 03:02:50.888227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:02:50.888239 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 03:02:50.888256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:02:50.888268 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 03:02:50.888285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:02:50.888307 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:50.888328 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
2026-02-14 03:02:56.734013 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:02:56.734215 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:56.734233 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 03:02:56.734263 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:02:56.734308 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:56.734329 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:56.734346 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 03:02:56.734384 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:02:56.734505 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:56.734517 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:56.734531 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:56.734544 | orchestrator | 2026-02-14 03:02:56.734556 | orchestrator | TASK 
[common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-02-14 03:02:56.734569 | orchestrator | Saturday 14 February 2026 03:02:51 +0000 (0:00:01.565) 0:00:28.387 ***** 2026-02-14 03:02:56.734580 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-14 03:02:56.734592 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-14 03:02:56.734613 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-14 03:02:56.734625 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-14 03:02:56.734642 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-14 03:02:56.734660 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-14 03:02:56.734676 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-14 03:02:56.734693 | orchestrator | 2026-02-14 03:02:56.734710 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-02-14 03:02:56.734727 | orchestrator | Saturday 14 February 2026 03:02:53 +0000 (0:00:01.886) 0:00:30.274 ***** 2026-02-14 03:02:56.734746 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-14 03:02:56.734758 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-14 03:02:56.734770 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-14 03:02:56.734791 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-14 03:02:56.734802 | orchestrator | changed: [testbed-node-3] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-14 03:02:56.734813 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-14 03:02:56.734825 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-14 03:02:56.734836 | orchestrator | 2026-02-14 03:02:56.734848 | orchestrator | TASK [common : Check common containers] **************************************** 2026-02-14 03:02:56.734859 | orchestrator | Saturday 14 February 2026 03:02:54 +0000 (0:00:01.689) 0:00:31.964 ***** 2026-02-14 03:02:56.734871 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 03:02:56.734896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 03:02:57.355303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 03:02:57.355408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 03:02:57.355448 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 03:02:57.355475 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2026-02-14 03:02:57.355488 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:57.355500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:57.355511 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 03:02:57.355542 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:57.355554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:57.355589 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:57.355601 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:57.355613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:57.355627 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:57.355639 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:02:57.355659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:04:19.264535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:04:19.264632 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:04:19.264639 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:04:19.264652 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:04:19.264656 | orchestrator |
2026-02-14 03:04:19.264661 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-02-14 03:04:19.264667 | orchestrator | Saturday 14 February 2026 03:02:57 +0000 (0:00:02.614) 0:00:34.578 *****
2026-02-14 03:04:19.264671 | orchestrator | changed: [testbed-manager]
2026-02-14 03:04:19.264676 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:04:19.264680 | orchestrator | changed: [testbed-node-1]
2026-02-14 03:04:19.264684 | orchestrator | changed: [testbed-node-2]
2026-02-14 03:04:19.264688 | orchestrator | changed: [testbed-node-3]
2026-02-14 03:04:19.264692 | orchestrator | changed: [testbed-node-4]
2026-02-14 03:04:19.264696 | orchestrator | changed: [testbed-node-5]
2026-02-14 03:04:19.264700 | orchestrator |
2026-02-14 03:04:19.264703 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-02-14 03:04:19.264707 | orchestrator | Saturday 14 February 2026 03:02:58 +0000 (0:00:01.401) 0:00:35.979 *****
2026-02-14 03:04:19.264711 | orchestrator | changed: [testbed-manager]
2026-02-14 03:04:19.264715 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:04:19.264719 | orchestrator | changed: [testbed-node-1]
2026-02-14 03:04:19.264722 | orchestrator | changed: [testbed-node-2]
2026-02-14 03:04:19.264726 | orchestrator | changed: [testbed-node-3]
2026-02-14 03:04:19.264730 | orchestrator | changed: [testbed-node-4]
2026-02-14 03:04:19.264734 | orchestrator | changed: [testbed-node-5]
2026-02-14 03:04:19.264737 | orchestrator |
2026-02-14 03:04:19.264741 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-14 03:04:19.264745 | orchestrator | Saturday 14 February 2026 03:02:59 +0000 (0:00:01.118) 0:00:37.098 *****
2026-02-14 03:04:19.264749 | orchestrator |
2026-02-14 03:04:19.264753 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-14 03:04:19.264757 | orchestrator | Saturday 14 February 2026 03:02:59 +0000 (0:00:00.063) 0:00:37.162 *****
2026-02-14 03:04:19.264760 | orchestrator |
2026-02-14 03:04:19.264764 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-14 03:04:19.264768 | orchestrator | Saturday 14 February 2026 03:02:59 +0000 (0:00:00.064) 0:00:37.226 *****
2026-02-14 03:04:19.264772 | orchestrator |
2026-02-14 03:04:19.264776 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-14 03:04:19.264779 | orchestrator | Saturday 14 February 2026 03:03:00 +0000 (0:00:00.062) 0:00:37.289 *****
2026-02-14 03:04:19.264783 | orchestrator |
2026-02-14 03:04:19.264787 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-14 03:04:19.264794 | orchestrator | Saturday 14 February 2026 03:03:00 +0000 (0:00:00.215) 0:00:37.505 *****
2026-02-14 03:04:19.264798 | orchestrator |
2026-02-14 03:04:19.264802 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-14 03:04:19.264806 | orchestrator | Saturday 14 February 2026 03:03:00 +0000 (0:00:00.062) 0:00:37.567 *****
2026-02-14 03:04:19.264809 | orchestrator |
2026-02-14 03:04:19.264814 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-14 03:04:19.264817 | orchestrator | Saturday 14 February 2026 03:03:00 +0000 (0:00:00.060) 0:00:37.628 *****
2026-02-14 03:04:19.264821 | orchestrator |
2026-02-14 03:04:19.264825 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-02-14 03:04:19.264829 | orchestrator | Saturday 14 February 2026 03:03:00 +0000 (0:00:00.087) 0:00:37.715 *****
2026-02-14 03:04:19.264833 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:04:19.264836 | orchestrator | changed: [testbed-manager]
2026-02-14 03:04:19.264840 | orchestrator | changed: [testbed-node-2]
2026-02-14 03:04:19.264844 | orchestrator | changed: [testbed-node-5]
2026-02-14 03:04:19.264848 | orchestrator | changed: [testbed-node-4]
2026-02-14 03:04:19.264860 | orchestrator | changed: [testbed-node-3]
2026-02-14 03:04:19.264864 | orchestrator | changed: [testbed-node-1]
2026-02-14 03:04:19.264868 | orchestrator |
2026-02-14 03:04:19.264872 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-02-14 03:04:19.264876 | orchestrator | Saturday 14 February 2026 03:03:37 +0000 (0:00:37.099) 0:01:14.815 *****
2026-02-14 03:04:19.264880 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:04:19.264884 | orchestrator | changed: [testbed-manager]
2026-02-14 03:04:19.264887 | orchestrator | changed: [testbed-node-4]
2026-02-14 03:04:19.264891 | orchestrator | changed: [testbed-node-3]
2026-02-14 03:04:19.264895 | orchestrator | changed: [testbed-node-5]
2026-02-14 03:04:19.264899 | orchestrator | changed: [testbed-node-2]
2026-02-14 03:04:19.264902 | orchestrator | changed: [testbed-node-1]
2026-02-14 03:04:19.264906 | orchestrator |
2026-02-14 03:04:19.264910 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-02-14 03:04:19.264914 | orchestrator | Saturday 14 February 2026 03:04:09 +0000 (0:00:32.032) 0:01:46.847 *****
2026-02-14 03:04:19.264918 | orchestrator | ok: [testbed-manager]
2026-02-14 03:04:19.264922 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:04:19.264926 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:04:19.264930 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:04:19.264934 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:04:19.264937 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:04:19.264941 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:04:19.264945 | orchestrator |
2026-02-14 03:04:19.264949 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-02-14 03:04:19.264953 | orchestrator | Saturday 14 February 2026 03:04:11 +0000 (0:00:01.795) 0:01:48.642 *****
2026-02-14 03:04:19.264957 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:04:19.264960 | orchestrator | changed: [testbed-node-1]
2026-02-14 03:04:19.264964 | orchestrator | changed: [testbed-node-2]
2026-02-14 03:04:19.264968 | orchestrator | changed: [testbed-node-4]
2026-02-14 03:04:19.264972 | orchestrator | changed: [testbed-node-3]
2026-02-14 03:04:19.264975 | orchestrator | changed: [testbed-node-5]
2026-02-14 03:04:19.264979 | orchestrator | changed: [testbed-manager]
2026-02-14 03:04:19.264983 | orchestrator |
2026-02-14 03:04:19.264987 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 03:04:19.264992 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-14 03:04:19.264997 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-14 03:04:19.265006 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-14 03:04:19.265013 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-14 03:04:19.265017 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-14 03:04:19.265021 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-14 03:04:19.265024 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-14 03:04:19.265028 | orchestrator |
2026-02-14 03:04:19.265032 | orchestrator |
2026-02-14 03:04:19.265036 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 03:04:19.265040 | orchestrator | Saturday 14 February 2026 03:04:19 +0000 (0:00:07.819) 0:01:56.462 *****
2026-02-14 03:04:19.265044 | orchestrator | ===============================================================================
2026-02-14 03:04:19.265047 | orchestrator | common : Restart fluentd container ------------------------------------- 37.10s
2026-02-14 03:04:19.265051 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 32.03s
2026-02-14 03:04:19.265055 | orchestrator | common : Restart cron container ----------------------------------------- 7.82s
2026-02-14 03:04:19.265059 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.49s
2026-02-14 03:04:19.265063 | orchestrator | common : Copying over config.json files for services -------------------- 3.30s
2026-02-14 03:04:19.265066 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.74s
2026-02-14 03:04:19.265070 | orchestrator | common : Check common containers ---------------------------------------- 2.61s
2026-02-14 03:04:19.265074 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.51s
2026-02-14 03:04:19.265078 | orchestrator | common : Copying over cron logrotate config file ------------------------ 1.94s
2026-02-14 03:04:19.265081 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 1.89s
2026-02-14 03:04:19.265085 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 1.89s
2026-02-14 03:04:19.265089 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.80s
2026-02-14 03:04:19.265093 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 1.75s
2026-02-14 03:04:19.265096 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.69s
2026-02-14 03:04:19.265100 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.57s
2026-02-14 03:04:19.265104 | orchestrator | common : Creating log volume -------------------------------------------- 1.40s
2026-02-14 03:04:19.265111 | orchestrator | common : include_tasks -------------------------------------------------- 1.33s
2026-02-14 03:04:19.670699 | orchestrator | common : include_tasks -------------------------------------------------- 1.30s
2026-02-14 03:04:19.670799 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.20s
2026-02-14 03:04:19.670816 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.12s
2026-02-14 03:04:21.955938 | orchestrator | 2026-02-14 03:04:21 | INFO  | Task a8da5568-0fa6-4002-b531-f72345561205 (loadbalancer) was prepared for execution.
2026-02-14 03:04:21.956035 | orchestrator | 2026-02-14 03:04:21 | INFO  | It takes a moment until task a8da5568-0fa6-4002-b531-f72345561205 (loadbalancer) has been started and output is visible here.
2026-02-14 03:04:36.474846 | orchestrator |
2026-02-14 03:04:36.474951 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-14 03:04:36.474966 | orchestrator |
2026-02-14 03:04:36.474978 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-14 03:04:36.474989 | orchestrator | Saturday 14 February 2026 03:04:26 +0000 (0:00:00.254) 0:00:00.254 *****
2026-02-14 03:04:36.475021 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:04:36.475034 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:04:36.475044 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:04:36.475054 | orchestrator |
2026-02-14 03:04:36.475064 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-14 03:04:36.475073 | orchestrator | Saturday 14 February 2026 03:04:26 +0000 (0:00:00.295) 0:00:00.549 *****
2026-02-14 03:04:36.475084 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-02-14 03:04:36.475094 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-02-14 03:04:36.475104 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-02-14 03:04:36.475114 | orchestrator |
2026-02-14 03:04:36.475124 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-02-14 03:04:36.475133 | orchestrator |
2026-02-14 03:04:36.475144 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-02-14 03:04:36.475166 | orchestrator | Saturday 14 February 2026 03:04:26 +0000 (0:00:00.462) 0:00:01.012 *****
2026-02-14 03:04:36.475176 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 03:04:36.475186 | orchestrator |
2026-02-14 03:04:36.475196 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-02-14 03:04:36.475256 | orchestrator | Saturday 14 February 2026 03:04:27 +0000 (0:00:00.568) 0:00:01.580 *****
2026-02-14 03:04:36.475267 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:04:36.475276 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:04:36.475286 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:04:36.475295 | orchestrator |
2026-02-14 03:04:36.475305 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-02-14 03:04:36.475315 | orchestrator | Saturday 14 February 2026 03:04:27 +0000 (0:00:00.571) 0:00:02.151 *****
2026-02-14 03:04:36.475324 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 03:04:36.475334 | orchestrator |
2026-02-14 03:04:36.475344 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-02-14 03:04:36.475353 | orchestrator | Saturday 14 February 2026 03:04:28 +0000 (0:00:00.580) 0:00:02.818 *****
2026-02-14 03:04:36.475363 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:04:36.475372 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:04:36.475382 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:04:36.475393 | orchestrator |
2026-02-14 03:04:36.475406 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-02-14 03:04:36.475417 | orchestrator | Saturday 14 February 2026 03:04:29 +0000 (0:00:00.580) 0:00:03.398 *****
2026-02-14 03:04:36.475428 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-14 03:04:36.475439 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-14 03:04:36.475450 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-14 03:04:36.475462 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-14 03:04:36.475473 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-14 03:04:36.475484 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-14 03:04:36.475495 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-14 03:04:36.475508 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-14 03:04:36.475519 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-14 03:04:36.475530 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-14 03:04:36.475549 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-14 03:04:36.475560 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-14 03:04:36.475571 | orchestrator |
2026-02-14 03:04:36.475582 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-14 03:04:36.475594 | orchestrator | Saturday 14 February 2026 03:04:32 +0000 (0:00:03.055) 0:00:06.454 *****
2026-02-14 03:04:36.475605 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-02-14 03:04:36.475616 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-02-14 03:04:36.475627 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-02-14 03:04:36.475638 | orchestrator |
2026-02-14 03:04:36.475650 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-14 03:04:36.475661 | orchestrator | Saturday 14 February 2026 03:04:32 +0000 (0:00:00.670) 0:00:07.124 *****
2026-02-14 03:04:36.475672 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-02-14 03:04:36.475684 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-02-14 03:04:36.475695 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-02-14 03:04:36.475706 | orchestrator |
2026-02-14 03:04:36.475717 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-02-14 03:04:36.475728 | orchestrator | Saturday 14 February 2026 03:04:34 +0000 (0:00:01.239) 0:00:08.364 *****
2026-02-14 03:04:36.475738 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-02-14 03:04:36.475747 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:04:36.475773 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-02-14 03:04:36.475783 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:04:36.475793 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-02-14 03:04:36.475802 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:04:36.475812 | orchestrator |
2026-02-14 03:04:36.475821 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-02-14 03:04:36.475831 | orchestrator | Saturday 14 February 2026 03:04:34 +0000 (0:00:00.513) 0:00:08.878 *****
2026-02-14 03:04:36.475848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-14 03:04:36.475864 | orchestrator | changed: [testbed-node-1] =>
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-14 03:04:36.475875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-14 03:04:36.475892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-14 
03:04:36.475903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-14 03:04:36.475920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-14 03:04:41.501403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-14 03:04:41.501558 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-14 03:04:41.501580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-14 03:04:41.501594 | orchestrator | 2026-02-14 03:04:41.501608 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-02-14 03:04:41.501622 | orchestrator | Saturday 14 February 2026 03:04:36 +0000 (0:00:01.767) 0:00:10.646 ***** 2026-02-14 03:04:41.501634 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:04:41.501681 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:04:41.501694 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:04:41.501705 | orchestrator | 2026-02-14 03:04:41.501717 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-02-14 03:04:41.501728 | orchestrator | Saturday 14 February 2026 03:04:37 +0000 (0:00:00.878) 0:00:11.525 ***** 2026-02-14 03:04:41.501740 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-02-14 03:04:41.501753 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-02-14 
03:04:41.501765 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-02-14 03:04:41.501776 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-02-14 03:04:41.501787 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-02-14 03:04:41.501797 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-02-14 03:04:41.501808 | orchestrator | 2026-02-14 03:04:41.501819 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-02-14 03:04:41.501830 | orchestrator | Saturday 14 February 2026 03:04:38 +0000 (0:00:01.408) 0:00:12.933 ***** 2026-02-14 03:04:41.501840 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:04:41.501851 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:04:41.501862 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:04:41.501874 | orchestrator | 2026-02-14 03:04:41.501885 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-02-14 03:04:41.501895 | orchestrator | Saturday 14 February 2026 03:04:39 +0000 (0:00:00.843) 0:00:13.777 ***** 2026-02-14 03:04:41.501906 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:04:41.501918 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:04:41.501929 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:04:41.501940 | orchestrator | 2026-02-14 03:04:41.501951 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-02-14 03:04:41.501962 | orchestrator | Saturday 14 February 2026 03:04:40 +0000 (0:00:01.304) 0:00:15.081 ***** 2026-02-14 03:04:41.501976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-14 03:04:41.502087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 03:04:41.502110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 03:04:41.502124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__f2c4f0547bed1541aa4d79533e4d9a2fa11b8463', '__omit_place_holder__f2c4f0547bed1541aa4d79533e4d9a2fa11b8463'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-14 03:04:41.502150 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:04:41.502161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-14 03:04:41.502247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 03:04:41.502262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 03:04:41.502274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f2c4f0547bed1541aa4d79533e4d9a2fa11b8463', '__omit_place_holder__f2c4f0547bed1541aa4d79533e4d9a2fa11b8463'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-14 03:04:41.502285 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:04:41.502313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-14 03:04:44.207530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 03:04:44.207676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 03:04:44.207693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f2c4f0547bed1541aa4d79533e4d9a2fa11b8463', '__omit_place_holder__f2c4f0547bed1541aa4d79533e4d9a2fa11b8463'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-14 03:04:44.207706 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:04:44.207720 | orchestrator | 2026-02-14 03:04:44.207732 | orchestrator | TASK [loadbalancer : Copying checks for services 
which are enabled] ************ 2026-02-14 03:04:44.207744 | orchestrator | Saturday 14 February 2026 03:04:41 +0000 (0:00:00.598) 0:00:15.679 ***** 2026-02-14 03:04:44.207756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-14 03:04:44.207769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-14 03:04:44.207780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-14 03:04:44.207834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-14 03:04:44.207847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 03:04:44.207859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f2c4f0547bed1541aa4d79533e4d9a2fa11b8463', 
'__omit_place_holder__f2c4f0547bed1541aa4d79533e4d9a2fa11b8463'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-14 03:04:44.207871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-14 03:04:44.207882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 03:04:44.207894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f2c4f0547bed1541aa4d79533e4d9a2fa11b8463', 
'__omit_place_holder__f2c4f0547bed1541aa4d79533e4d9a2fa11b8463'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-14 03:04:44.207946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-14 03:04:52.219664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 03:04:52.219754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__f2c4f0547bed1541aa4d79533e4d9a2fa11b8463', 
'__omit_place_holder__f2c4f0547bed1541aa4d79533e4d9a2fa11b8463'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-14 03:04:52.219766 | orchestrator | 2026-02-14 03:04:52.219775 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-02-14 03:04:52.219783 | orchestrator | Saturday 14 February 2026 03:04:44 +0000 (0:00:02.704) 0:00:18.384 ***** 2026-02-14 03:04:52.219791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-14 03:04:52.219800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-14 03:04:52.219807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-14 03:04:52.219833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-14 03:04:52.219867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-14 03:04:52.219875 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-14 03:04:52.219883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-14 03:04:52.219891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-14 03:04:52.219898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-14 03:04:52.219905 | orchestrator | 2026-02-14 03:04:52.219912 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-02-14 03:04:52.219919 | orchestrator | Saturday 14 February 2026 03:04:47 +0000 (0:00:03.030) 0:00:21.414 ***** 2026-02-14 03:04:52.219933 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-14 03:04:52.219941 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-14 03:04:52.219948 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-14 03:04:52.219955 | orchestrator | 2026-02-14 03:04:52.219962 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-02-14 03:04:52.219968 | orchestrator | Saturday 14 February 2026 03:04:49 +0000 (0:00:01.787) 0:00:23.201 ***** 2026-02-14 03:04:52.219975 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-14 03:04:52.219982 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-14 03:04:52.219989 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-14 03:04:52.219996 | orchestrator | 2026-02-14 03:04:52.220002 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-02-14 03:04:52.220009 | orchestrator | Saturday 14 February 2026 03:04:51 +0000 
(0:00:02.663) 0:00:25.865 ***** 2026-02-14 03:04:52.220016 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:04:52.220025 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:04:52.220031 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:04:52.220039 | orchestrator | 2026-02-14 03:04:52.220051 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-02-14 03:05:03.434797 | orchestrator | Saturday 14 February 2026 03:04:52 +0000 (0:00:00.532) 0:00:26.397 ***** 2026-02-14 03:05:03.434875 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-14 03:05:03.434890 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-14 03:05:03.434894 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-14 03:05:03.434899 | orchestrator | 2026-02-14 03:05:03.434904 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-02-14 03:05:03.434908 | orchestrator | Saturday 14 February 2026 03:04:54 +0000 (0:00:01.970) 0:00:28.367 ***** 2026-02-14 03:05:03.434913 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-14 03:05:03.434918 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-14 03:05:03.434922 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-14 03:05:03.434926 | orchestrator | 2026-02-14 03:05:03.434930 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-02-14 03:05:03.434934 | orchestrator | Saturday 14 February 2026 
03:04:56 +0000 (0:00:02.064) 0:00:30.431 ***** 2026-02-14 03:05:03.434939 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-02-14 03:05:03.434943 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-02-14 03:05:03.434947 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-02-14 03:05:03.434951 | orchestrator | 2026-02-14 03:05:03.434964 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-02-14 03:05:03.434968 | orchestrator | Saturday 14 February 2026 03:04:57 +0000 (0:00:01.389) 0:00:31.821 ***** 2026-02-14 03:05:03.434973 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-02-14 03:05:03.434977 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-02-14 03:05:03.434981 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-02-14 03:05:03.434985 | orchestrator | 2026-02-14 03:05:03.435002 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-14 03:05:03.435006 | orchestrator | Saturday 14 February 2026 03:04:59 +0000 (0:00:01.415) 0:00:33.237 ***** 2026-02-14 03:05:03.435011 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:05:03.435015 | orchestrator | 2026-02-14 03:05:03.435019 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-02-14 03:05:03.435022 | orchestrator | Saturday 14 February 2026 03:04:59 +0000 (0:00:00.539) 0:00:33.776 ***** 2026-02-14 03:05:03.435028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-14 03:05:03.435035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-14 03:05:03.435043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-14 03:05:03.435059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-14 03:05:03.435064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-14 03:05:03.435068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-14 03:05:03.435078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-14 03:05:03.435083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-14 03:05:03.435087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-14 03:05:03.435091 | orchestrator | 2026-02-14 03:05:03.435095 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-02-14 03:05:03.435099 | orchestrator | Saturday 14 February 2026 03:05:02 +0000 (0:00:03.286) 0:00:37.062 ***** 2026-02-14 03:05:03.435110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-14 03:05:04.224628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 03:05:04.224714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 03:05:04.224744 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:05:04.224755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-14 03:05:04.224763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 03:05:04.224784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 03:05:04.224792 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:05:04.224807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-14 03:05:04.224842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 03:05:04.224852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 03:05:04.224865 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:05:04.224873 | orchestrator | 2026-02-14 03:05:04.224881 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-02-14 
03:05:04.224889 | orchestrator | Saturday 14 February 2026 03:05:03 +0000 (0:00:00.553) 0:00:37.616 ***** 2026-02-14 03:05:04.224898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-14 03:05:04.224906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 03:05:04.224914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 03:05:04.224921 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:05:04.224929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-14 03:05:04.224945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 03:05:05.075354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 03:05:05.075482 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:05:05.075500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-14 03:05:05.075515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 03:05:05.075527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 03:05:05.075538 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:05:05.075550 | orchestrator | 2026-02-14 03:05:05.075562 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-14 03:05:05.075575 | orchestrator | Saturday 14 February 2026 03:05:04 +0000 (0:00:00.788) 0:00:38.404 ***** 2026-02-14 03:05:05.075586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-14 03:05:05.075599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 03:05:05.075629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 03:05:05.075650 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:05:05.075662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-14 03:05:05.075674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 03:05:05.075686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 03:05:05.075697 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:05:05.075709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-14 03:05:05.075737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 03:05:05.075754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 03:05:05.075782 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:05:06.433746 | orchestrator | 2026-02-14 03:05:06.433856 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-14 03:05:06.433874 | orchestrator | Saturday 14 February 2026 03:05:05 +0000 (0:00:00.846) 0:00:39.251 ***** 2026-02-14 03:05:06.433890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-14 03:05:06.433906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 03:05:06.433918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 03:05:06.433931 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:05:06.433943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-14 03:05:06.433955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 03:05:06.433984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 03:05:06.434087 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:05:06.434123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-14 03:05:06.434136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 03:05:06.434148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 03:05:06.434161 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:05:06.434189 | orchestrator | 2026-02-14 03:05:06.434208 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-14 03:05:06.434227 | orchestrator | Saturday 14 February 2026 03:05:05 +0000 (0:00:00.577) 0:00:39.829 ***** 2026-02-14 03:05:06.434245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-14 03:05:06.434321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 03:05:06.434377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 03:05:06.434400 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:05:06.434436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-14 03:05:07.429109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 03:05:07.429211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 03:05:07.429242 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:05:07.429303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-14 03:05:07.429316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 03:05:07.429329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 03:05:07.429364 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:05:07.429376 | orchestrator | 2026-02-14 03:05:07.429388 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-02-14 03:05:07.429400 | orchestrator | Saturday 14 February 2026 03:05:06 +0000 (0:00:00.787) 0:00:40.616 ***** 2026-02-14 03:05:07.429427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 
 2026-02-14 03:05:07.429459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 03:05:07.429471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 03:05:07.429483 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:05:07.429494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  
2026-02-14 03:05:07.429506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 03:05:07.429525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 03:05:07.429536 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:05:07.429553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  
2026-02-14 03:05:07.429572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 03:05:08.737600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 03:05:08.737693 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:05:08.737707 | orchestrator | 2026-02-14 03:05:08.737717 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-02-14 03:05:08.737727 | orchestrator | Saturday 14 February 2026 03:05:07 +0000 (0:00:00.989) 0:00:41.606 ***** 2026-02-14 03:05:08.737737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-14 03:05:08.737748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 03:05:08.737775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 03:05:08.737784 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:05:08.737793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-14 03:05:08.737816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 03:05:08.737838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 03:05:08.737847 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:05:08.737856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-14 03:05:08.737865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 03:05:08.737879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 03:05:08.737887 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:05:08.737895 | orchestrator | 2026-02-14 03:05:08.737904 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-02-14 03:05:08.737912 | orchestrator | Saturday 14 February 2026 03:05:07 +0000 (0:00:00.572) 0:00:42.179 ***** 2026-02-14 03:05:08.737920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-14 03:05:08.737929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 03:05:08.737950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 03:05:15.082991 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:05:15.083098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-14 03:05:15.083117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 03:05:15.083156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 03:05:15.083169 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:05:15.083181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-14 03:05:15.083208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 03:05:15.083220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 03:05:15.083232 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:05:15.083244 | orchestrator | 2026-02-14 03:05:15.083256 | orchestrator | 
TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-02-14 03:05:15.083316 | orchestrator | Saturday 14 February 2026 03:05:08 +0000 (0:00:00.742) 0:00:42.921 ***** 2026-02-14 03:05:15.083330 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-14 03:05:15.083359 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-14 03:05:15.083370 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-14 03:05:15.083381 | orchestrator | 2026-02-14 03:05:15.083392 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-02-14 03:05:15.083404 | orchestrator | Saturday 14 February 2026 03:05:10 +0000 (0:00:01.593) 0:00:44.514 ***** 2026-02-14 03:05:15.083416 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-14 03:05:15.083427 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-14 03:05:15.083438 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-14 03:05:15.083449 | orchestrator | 2026-02-14 03:05:15.083469 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-02-14 03:05:15.083480 | orchestrator | Saturday 14 February 2026 03:05:12 +0000 (0:00:01.695) 0:00:46.210 ***** 2026-02-14 03:05:15.083491 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-14 03:05:15.083502 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-14 03:05:15.083513 | orchestrator | skipping: [testbed-node-0] => 
(item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-14 03:05:15.083524 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:05:15.083535 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-14 03:05:15.083546 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-14 03:05:15.083557 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:05:15.083568 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-14 03:05:15.083579 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:05:15.083590 | orchestrator | 2026-02-14 03:05:15.083601 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-02-14 03:05:15.083612 | orchestrator | Saturday 14 February 2026 03:05:12 +0000 (0:00:00.856) 0:00:47.067 ***** 2026-02-14 03:05:15.083623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-14 03:05:15.083636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-14 03:05:15.083653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-14 03:05:15.083674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-14 03:05:19.154974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-14 03:05:19.155060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-14 03:05:19.155068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-14 03:05:19.155073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-14 03:05:19.155111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-14 03:05:19.155117 | orchestrator | 2026-02-14 03:05:19.155134 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-02-14 03:05:19.155139 | orchestrator | Saturday 14 February 2026 03:05:15 +0000 (0:00:02.197) 0:00:49.264 ***** 2026-02-14 03:05:19.155144 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:05:19.155148 | orchestrator | 2026-02-14 03:05:19.155152 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-02-14 03:05:19.155156 | orchestrator | Saturday 14 February 2026 03:05:15 +0000 (0:00:00.760) 0:00:50.024 ***** 2026-02-14 03:05:19.155171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-14 03:05:19.155190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-14 03:05:19.155195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-14 03:05:19.155199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-14 03:05:19.155203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-14 03:05:19.155210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-14 03:05:19.155214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-14 03:05:19.155226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-14 03:05:19.786388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-14 03:05:19.786480 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-14 03:05:19.786493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-14 03:05:19.786521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-14 03:05:19.786532 | orchestrator | 2026-02-14 03:05:19.786543 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 
2026-02-14 03:05:19.786554 | orchestrator | Saturday 14 February 2026 03:05:19 +0000 (0:00:03.308) 0:00:53.332 ***** 2026-02-14 03:05:19.786565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-14 03:05:19.786612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-14 03:05:19.786630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-14 03:05:19.786646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-14 03:05:19.786661 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:05:19.786678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-14 03:05:19.786699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-14 03:05:19.786725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-14 03:05:19.786742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-14 03:05:19.786757 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:05:19.786784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-14 03:05:27.985334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-14 03:05:27.985450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  
2026-02-14 03:05:27.985468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-14 03:05:27.985506 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:05:27.985521 | orchestrator | 2026-02-14 03:05:27.985534 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-02-14 03:05:27.985546 | orchestrator | Saturday 14 February 2026 03:05:19 +0000 (0:00:00.638) 0:00:53.970 ***** 2026-02-14 03:05:27.985558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-14 03:05:27.985572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-14 03:05:27.985585 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:05:27.985613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-14 03:05:27.985625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-14 03:05:27.985636 | 
orchestrator | skipping: [testbed-node-1] 2026-02-14 03:05:27.985647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-14 03:05:27.985658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-14 03:05:27.985669 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:05:27.985680 | orchestrator | 2026-02-14 03:05:27.985691 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-02-14 03:05:27.985702 | orchestrator | Saturday 14 February 2026 03:05:20 +0000 (0:00:01.075) 0:00:55.046 ***** 2026-02-14 03:05:27.985713 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:05:27.985724 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:05:27.985734 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:05:27.985745 | orchestrator | 2026-02-14 03:05:27.985757 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-02-14 03:05:27.985768 | orchestrator | Saturday 14 February 2026 03:05:22 +0000 (0:00:01.253) 0:00:56.300 ***** 2026-02-14 03:05:27.985779 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:05:27.985790 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:05:27.985801 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:05:27.985811 | orchestrator | 2026-02-14 03:05:27.985822 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-02-14 03:05:27.985833 | orchestrator | Saturday 14 February 2026 03:05:24 +0000 (0:00:01.929) 0:00:58.229 ***** 2026-02-14 03:05:27.985844 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:05:27.985855 | 
orchestrator | 2026-02-14 03:05:27.985884 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-02-14 03:05:27.985896 | orchestrator | Saturday 14 February 2026 03:05:24 +0000 (0:00:00.637) 0:00:58.867 ***** 2026-02-14 03:05:27.985909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-14 03:05:27.985937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-14 03:05:27.985951 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-14 03:05:27.985963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-14 03:05:27.985975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-14 03:05:27.985995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-14 03:05:28.585163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-14 03:05:28.585336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-14 03:05:28.585358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-14 03:05:28.585372 | orchestrator | 2026-02-14 03:05:28.585385 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-02-14 03:05:28.585398 | orchestrator | Saturday 14 February 2026 03:05:27 +0000 (0:00:03.294) 0:01:02.162 ***** 2026-02-14 03:05:28.585411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-14 03:05:28.585423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-14 03:05:28.585478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-14 03:05:28.585492 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:05:28.585511 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-14 03:05:28.585523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-14 03:05:28.585534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-14 03:05:28.585545 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:05:28.585557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-14 03:05:28.585584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-14 03:05:37.744577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-14 03:05:37.744692 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:05:37.744711 | orchestrator | 2026-02-14 03:05:37.744724 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-02-14 03:05:37.744737 | orchestrator | Saturday 14 February 2026 03:05:28 +0000 (0:00:00.603) 0:01:02.765 ***** 2026-02-14 03:05:37.744765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-14 03:05:37.744781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-14 03:05:37.744794 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:05:37.744805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-14 03:05:37.744816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-14 03:05:37.744828 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:05:37.744839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-14 03:05:37.744851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-14 03:05:37.744862 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:05:37.744873 | orchestrator | 2026-02-14 03:05:37.744884 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-02-14 03:05:37.744895 | orchestrator | Saturday 14 February 2026 03:05:29 +0000 (0:00:00.817) 0:01:03.583 ***** 2026-02-14 03:05:37.744906 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:05:37.744918 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:05:37.744929 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:05:37.744940 | orchestrator | 2026-02-14 03:05:37.744952 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-02-14 03:05:37.744963 | orchestrator | Saturday 14 February 2026 03:05:30 +0000 (0:00:01.475) 0:01:05.058 ***** 2026-02-14 03:05:37.744998 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:05:37.745009 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:05:37.745020 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:05:37.745031 | orchestrator | 2026-02-14 03:05:37.745042 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-02-14 03:05:37.745053 | orchestrator | 
Saturday 14 February 2026 03:05:32 +0000 (0:00:01.904) 0:01:06.963 ***** 2026-02-14 03:05:37.745064 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:05:37.745075 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:05:37.745086 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:05:37.745100 | orchestrator | 2026-02-14 03:05:37.745113 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-02-14 03:05:37.745127 | orchestrator | Saturday 14 February 2026 03:05:33 +0000 (0:00:00.316) 0:01:07.279 ***** 2026-02-14 03:05:37.745140 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:05:37.745152 | orchestrator | 2026-02-14 03:05:37.745165 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-02-14 03:05:37.745178 | orchestrator | Saturday 14 February 2026 03:05:33 +0000 (0:00:00.655) 0:01:07.935 ***** 2026-02-14 03:05:37.745213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-14 03:05:37.745235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': 
{'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-14 03:05:37.745250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-14 03:05:37.745264 | orchestrator | 2026-02-14 03:05:37.745277 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-02-14 03:05:37.745290 | orchestrator | Saturday 14 February 2026 03:05:36 +0000 (0:00:02.654) 0:01:10.589 ***** 2026-02-14 03:05:37.745356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 
'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-14 03:05:37.745371 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:05:37.745384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-14 03:05:37.745397 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:05:37.745419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-14 03:05:45.192513 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:05:45.192623 | orchestrator | 2026-02-14 03:05:45.192641 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-02-14 03:05:45.192654 | orchestrator | Saturday 14 February 2026 03:05:37 +0000 (0:00:01.336) 0:01:11.926 ***** 2026-02-14 03:05:45.192686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-14 03:05:45.192702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-14 03:05:45.192714 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:05:45.192726 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-14 03:05:45.192760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-14 03:05:45.192772 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:05:45.192783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-14 03:05:45.192795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-14 03:05:45.192806 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:05:45.192817 | orchestrator | 2026-02-14 03:05:45.192828 
| orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-02-14 03:05:45.192840 | orchestrator | Saturday 14 February 2026 03:05:39 +0000 (0:00:01.633) 0:01:13.559 ***** 2026-02-14 03:05:45.192851 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:05:45.192862 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:05:45.192873 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:05:45.192884 | orchestrator | 2026-02-14 03:05:45.192899 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-02-14 03:05:45.192911 | orchestrator | Saturday 14 February 2026 03:05:39 +0000 (0:00:00.432) 0:01:13.992 ***** 2026-02-14 03:05:45.192922 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:05:45.192933 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:05:45.192944 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:05:45.192955 | orchestrator | 2026-02-14 03:05:45.192966 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-02-14 03:05:45.192977 | orchestrator | Saturday 14 February 2026 03:05:41 +0000 (0:00:01.350) 0:01:15.343 ***** 2026-02-14 03:05:45.192989 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:05:45.193000 | orchestrator | 2026-02-14 03:05:45.193011 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-02-14 03:05:45.193022 | orchestrator | Saturday 14 February 2026 03:05:42 +0000 (0:00:00.891) 0:01:16.235 ***** 2026-02-14 03:05:45.193058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-14 03:05:45.193099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 03:05:45.193114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-14 
03:05:45.193128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-14 03:05:45.193142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-14 03:05:45.193163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 03:05:45.869660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-14 03:05:45.869822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-14 03:05:45.869853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-14 03:05:45.869874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 03:05:45.869895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-14 03:05:45.869954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-14 03:05:45.869979 | orchestrator | 2026-02-14 03:05:45.869993 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-02-14 03:05:45.870005 | orchestrator | Saturday 14 February 2026 03:05:45 +0000 (0:00:03.230) 0:01:19.465 ***** 2026-02-14 03:05:45.870086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-14 03:05:45.870099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 03:05:45.870111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-14 03:05:45.870123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-14 03:05:45.870138 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:05:45.870171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-14 03:05:51.798291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 
5672'], 'timeout': '30'}}})  2026-02-14 03:05:51.798459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-14 03:05:51.798480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-14 03:05:51.798494 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:05:51.798509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-14 03:05:51.798522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 03:05:51.798591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-14 
03:05:51.798606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-14 03:05:51.798617 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:05:51.798628 | orchestrator | 2026-02-14 03:05:51.798641 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-02-14 03:05:51.798654 | orchestrator | Saturday 14 February 2026 03:05:45 +0000 (0:00:00.687) 0:01:20.152 ***** 2026-02-14 03:05:51.798666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-14 03:05:51.798679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-14 03:05:51.798692 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:05:51.798704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-14 03:05:51.798715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-14 03:05:51.798726 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:05:51.798737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-14 03:05:51.798749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-14 03:05:51.798760 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:05:51.798771 | orchestrator | 2026-02-14 03:05:51.798782 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-02-14 03:05:51.798792 | orchestrator | Saturday 14 February 2026 03:05:47 +0000 (0:00:01.099) 0:01:21.252 ***** 2026-02-14 03:05:51.798804 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:05:51.798826 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:05:51.798839 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:05:51.798852 | orchestrator | 2026-02-14 03:05:51.798865 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-02-14 03:05:51.798879 | orchestrator | Saturday 14 February 2026 03:05:48 +0000 (0:00:01.288) 0:01:22.541 ***** 2026-02-14 03:05:51.798930 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:05:51.798944 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:05:51.798957 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:05:51.798969 | orchestrator | 2026-02-14 03:05:51.798982 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-02-14 
03:05:51.798995 | orchestrator | Saturday 14 February 2026 03:05:50 +0000 (0:00:01.911) 0:01:24.452 ***** 2026-02-14 03:05:51.799008 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:05:51.799020 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:05:51.799033 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:05:51.799044 | orchestrator | 2026-02-14 03:05:51.799057 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-02-14 03:05:51.799070 | orchestrator | Saturday 14 February 2026 03:05:50 +0000 (0:00:00.319) 0:01:24.771 ***** 2026-02-14 03:05:51.799084 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:05:51.799096 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:05:51.799107 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:05:51.799118 | orchestrator | 2026-02-14 03:05:51.799129 | orchestrator | TASK [include_role : designate] ************************************************ 2026-02-14 03:05:51.799140 | orchestrator | Saturday 14 February 2026 03:05:50 +0000 (0:00:00.302) 0:01:25.074 ***** 2026-02-14 03:05:51.799151 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:05:51.799162 | orchestrator | 2026-02-14 03:05:51.799173 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-02-14 03:05:51.799190 | orchestrator | Saturday 14 February 2026 03:05:51 +0000 (0:00:00.904) 0:01:25.978 ***** 2026-02-14 03:05:55.022712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-14 03:05:55.022813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-14 03:05:55.022829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-14 03:05:55.022865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-14 03:05:55.022877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-14 03:05:55.022919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 
2026-02-14 03:05:55.022931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-14 03:05:55.022942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-14 03:05:55.022953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-14 
03:05:55.022970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-14 03:05:55.022980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-14 03:05:55.023001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-14 03:05:55.833307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-14 03:05:55.833480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-14 03:05:55.833512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-14 03:05:55.833571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-14 03:05:55.833595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-14 03:05:55.833615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-14 03:05:55.833677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-14 03:05:55.833701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-14 03:05:55.833721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-14 03:05:55.833756 | orchestrator | 2026-02-14 03:05:55.833779 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-02-14 03:05:55.833800 | orchestrator | Saturday 14 February 2026 03:05:55 +0000 (0:00:03.436) 0:01:29.415 ***** 2026-02-14 03:05:55.833822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-14 03:05:55.833845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-14 
03:05:55.833866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-14 03:05:55.833899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-14 03:05:56.302251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-14 03:05:56.302384 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-14 03:05:56.302432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-14 03:05:56.302447 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:05:56.302463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-14 03:05:56.302477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-14 03:05:56.303010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-14 03:05:56.303052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-14 03:05:56.303067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-14 03:05:56.303092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-14 03:05:56.303112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-14 03:05:56.303125 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:05:56.303139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-14 03:05:56.303151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-14 03:05:56.303170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-14 03:06:05.769607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-14 03:06:05.769706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-14 03:06:05.769733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-14 03:06:05.769744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-14 03:06:05.769753 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:06:05.769764 | orchestrator | 2026-02-14 03:06:05.769773 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-02-14 03:06:05.769783 | orchestrator | Saturday 14 February 2026 03:05:56 +0000 (0:00:01.066) 0:01:30.481 ***** 2026-02-14 03:06:05.769792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-14 03:06:05.769802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-14 03:06:05.769812 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:06:05.769820 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-14 03:06:05.769828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-14 03:06:05.769836 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:06:05.769844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-14 03:06:05.769870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-14 03:06:05.769879 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:06:05.769887 | orchestrator | 2026-02-14 03:06:05.769895 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-02-14 03:06:05.769918 | orchestrator | Saturday 14 February 2026 03:05:57 +0000 (0:00:01.215) 0:01:31.697 ***** 2026-02-14 03:06:05.769927 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:06:05.769935 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:06:05.769943 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:06:05.769951 | orchestrator | 2026-02-14 03:06:05.769960 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-02-14 03:06:05.769968 | orchestrator | Saturday 14 February 2026 03:05:58 +0000 (0:00:01.319) 0:01:33.016 ***** 2026-02-14 03:06:05.769976 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:06:05.769983 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:06:05.769991 | 
orchestrator | changed: [testbed-node-2] 2026-02-14 03:06:05.769999 | orchestrator | 2026-02-14 03:06:05.770007 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-02-14 03:06:05.770065 | orchestrator | Saturday 14 February 2026 03:06:00 +0000 (0:00:01.933) 0:01:34.950 ***** 2026-02-14 03:06:05.770075 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:06:05.770083 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:06:05.770091 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:06:05.770099 | orchestrator | 2026-02-14 03:06:05.770107 | orchestrator | TASK [include_role : glance] *************************************************** 2026-02-14 03:06:05.770115 | orchestrator | Saturday 14 February 2026 03:06:01 +0000 (0:00:00.303) 0:01:35.254 ***** 2026-02-14 03:06:05.770124 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:06:05.770132 | orchestrator | 2026-02-14 03:06:05.770140 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-02-14 03:06:05.770148 | orchestrator | Saturday 14 February 2026 03:06:02 +0000 (0:00:01.001) 0:01:36.256 ***** 2026-02-14 03:06:05.770166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-14 03:06:05.770186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-14 03:06:08.619930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-14 03:06:08.620040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-14 03:06:08.620106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-14 03:06:08.620123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-14 03:06:08.620144 | orchestrator | 2026-02-14 03:06:08.620158 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-02-14 03:06:08.620171 | orchestrator | Saturday 14 February 2026 03:06:05 +0000 (0:00:03.803) 0:01:40.059 ***** 2026-02-14 03:06:08.620196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-14 03:06:08.716318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-14 03:06:08.716543 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:06:08.716575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-14 03:06:08.716654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-14 03:06:08.716694 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:06:08.716717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-14 03:06:08.716760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 
'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-14 03:06:20.271286 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:06:20.271488 | orchestrator | 2026-02-14 03:06:20.271521 | orchestrator | TASK [haproxy-config : Configuring 
firewall for glance] ************************ 2026-02-14 03:06:20.271543 | orchestrator | Saturday 14 February 2026 03:06:08 +0000 (0:00:02.841) 0:01:42.901 ***** 2026-02-14 03:06:20.271567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-14 03:06:20.271593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-14 03:06:20.271607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-14 03:06:20.271619 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:06:20.271631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-14 03:06:20.271642 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:06:20.271654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-14 03:06:20.271681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-14 03:06:20.271694 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:06:20.271705 | orchestrator | 2026-02-14 03:06:20.271716 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-02-14 03:06:20.271728 | orchestrator | Saturday 14 February 2026 03:06:12 +0000 (0:00:03.589) 0:01:46.490 ***** 2026-02-14 
03:06:20.271764 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:06:20.271776 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:06:20.271786 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:06:20.271797 | orchestrator | 2026-02-14 03:06:20.271810 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-02-14 03:06:20.271823 | orchestrator | Saturday 14 February 2026 03:06:13 +0000 (0:00:01.324) 0:01:47.815 ***** 2026-02-14 03:06:20.271835 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:06:20.271848 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:06:20.271861 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:06:20.271874 | orchestrator | 2026-02-14 03:06:20.271887 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-02-14 03:06:20.271920 | orchestrator | Saturday 14 February 2026 03:06:15 +0000 (0:00:02.013) 0:01:49.828 ***** 2026-02-14 03:06:20.271933 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:06:20.271946 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:06:20.271959 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:06:20.271972 | orchestrator | 2026-02-14 03:06:20.271985 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-02-14 03:06:20.271997 | orchestrator | Saturday 14 February 2026 03:06:15 +0000 (0:00:00.304) 0:01:50.133 ***** 2026-02-14 03:06:20.272009 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:06:20.272022 | orchestrator | 2026-02-14 03:06:20.272035 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-02-14 03:06:20.272047 | orchestrator | Saturday 14 February 2026 03:06:16 +0000 (0:00:01.020) 0:01:51.153 ***** 2026-02-14 03:06:20.272061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': 
{'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-14 03:06:20.272076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-14 03:06:20.272091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-14 03:06:20.272104 | orchestrator | 2026-02-14 03:06:20.272117 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-02-14 03:06:20.272137 | orchestrator | Saturday 14 February 2026 03:06:19 +0000 (0:00:02.922) 0:01:54.076 ***** 2026-02-14 03:06:20.272151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-14 03:06:20.272165 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:06:20.272187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-14 03:06:28.765945 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:06:28.766124 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-14 03:06:28.766242 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:06:28.766478 | orchestrator | 2026-02-14 03:06:28.766507 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-02-14 03:06:28.766527 | orchestrator | Saturday 14 February 2026 03:06:20 +0000 (0:00:00.375) 0:01:54.451 ***** 2026-02-14 03:06:28.766546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-14 03:06:28.766565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-14 03:06:28.766585 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:06:28.766602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-14 03:06:28.766619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-14 03:06:28.766638 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:06:28.766656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-14 03:06:28.766671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-14 03:06:28.766703 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:06:28.766714 | orchestrator | 2026-02-14 03:06:28.766726 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-02-14 03:06:28.766738 | orchestrator | Saturday 14 February 2026 03:06:21 +0000 (0:00:00.869) 0:01:55.321 ***** 2026-02-14 03:06:28.766749 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:06:28.766761 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:06:28.766772 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:06:28.766783 | orchestrator | 2026-02-14 03:06:28.766794 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-02-14 03:06:28.766805 | orchestrator | Saturday 14 February 2026 03:06:22 +0000 (0:00:01.272) 0:01:56.594 ***** 2026-02-14 03:06:28.766816 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:06:28.766827 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:06:28.766837 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:06:28.766846 | orchestrator | 2026-02-14 03:06:28.766856 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-02-14 03:06:28.766889 | orchestrator | Saturday 14 February 2026 03:06:24 +0000 (0:00:01.936) 0:01:58.530 ***** 2026-02-14 03:06:28.766900 | orchestrator 
| skipping: [testbed-node-0] 2026-02-14 03:06:28.766910 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:06:28.766919 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:06:28.766929 | orchestrator | 2026-02-14 03:06:28.766939 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-02-14 03:06:28.766948 | orchestrator | Saturday 14 February 2026 03:06:24 +0000 (0:00:00.303) 0:01:58.833 ***** 2026-02-14 03:06:28.766958 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:06:28.766968 | orchestrator | 2026-02-14 03:06:28.766978 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-02-14 03:06:28.766987 | orchestrator | Saturday 14 February 2026 03:06:25 +0000 (0:00:01.046) 0:01:59.880 ***** 2026-02-14 03:06:28.767032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-14 03:06:28.767088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-14 03:06:28.767120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-14 03:06:30.357019 | orchestrator | 2026-02-14 03:06:30.357124 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-02-14 03:06:30.357142 | orchestrator | Saturday 14 February 2026 03:06:28 +0000 (0:00:03.063) 0:02:02.944 ***** 2026-02-14 03:06:30.357178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': 
False, 'custom_member_list': []}}}})  2026-02-14 03:06:30.357196 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:06:30.357231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-14 03:06:30.357266 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:06:30.357286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-14 03:06:30.357299 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:06:30.357310 | orchestrator | 2026-02-14 03:06:30.357322 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-02-14 03:06:30.357333 | orchestrator | Saturday 14 February 2026 03:06:29 +0000 (0:00:00.683) 0:02:03.627 ***** 2026-02-14 03:06:30.357345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-14 03:06:30.357366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-14 03:06:30.357380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-14 03:06:30.357449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-14 03:06:38.765947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-14 03:06:38.766107 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:06:38.766128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-14 03:06:38.766143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-14 03:06:38.766175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-14 03:06:38.766190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 
'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-14 03:06:38.766203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-14 03:06:38.766214 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:06:38.766225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-14 03:06:38.766235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-14 03:06:38.766246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-14 03:06:38.766279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-14 03:06:38.766291 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-14 03:06:38.766302 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:06:38.766312 | orchestrator | 2026-02-14 03:06:38.766324 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-02-14 03:06:38.766336 | orchestrator | Saturday 14 February 2026 03:06:30 +0000 (0:00:00.912) 0:02:04.539 ***** 2026-02-14 03:06:38.766346 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:06:38.766356 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:06:38.766365 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:06:38.766374 | orchestrator | 2026-02-14 03:06:38.766490 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-02-14 03:06:38.766506 | orchestrator | Saturday 14 February 2026 03:06:31 +0000 (0:00:01.566) 0:02:06.106 ***** 2026-02-14 03:06:38.766517 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:06:38.766528 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:06:38.766538 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:06:38.766549 | orchestrator | 2026-02-14 03:06:38.766559 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-02-14 03:06:38.766569 | orchestrator | Saturday 14 February 2026 03:06:33 +0000 (0:00:01.970) 0:02:08.076 ***** 2026-02-14 03:06:38.766579 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:06:38.766590 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:06:38.766623 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:06:38.766634 | orchestrator | 2026-02-14 03:06:38.766644 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-02-14 03:06:38.766654 | orchestrator | Saturday 14 February 2026 03:06:34 +0000 (0:00:00.325) 0:02:08.402 
***** 2026-02-14 03:06:38.766664 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:06:38.766673 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:06:38.766682 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:06:38.766692 | orchestrator | 2026-02-14 03:06:38.766702 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-02-14 03:06:38.766713 | orchestrator | Saturday 14 February 2026 03:06:34 +0000 (0:00:00.304) 0:02:08.707 ***** 2026-02-14 03:06:38.766724 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:06:38.766734 | orchestrator | 2026-02-14 03:06:38.766744 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-02-14 03:06:38.766755 | orchestrator | Saturday 14 February 2026 03:06:35 +0000 (0:00:01.078) 0:02:09.786 ***** 2026-02-14 03:06:38.766781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-14 
03:06:38.766811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-14 03:06:38.766824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-14 03:06:38.766836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': 
{'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-14 03:06:38.766857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-14 03:06:39.348893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-14 03:06:39.348999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-14 03:06:39.349037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-14 03:06:39.349051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-14 03:06:39.349064 | orchestrator | 2026-02-14 03:06:39.349077 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-02-14 03:06:39.349089 | orchestrator | Saturday 14 February 2026 03:06:38 +0000 (0:00:03.158) 0:02:12.945 ***** 2026-02-14 03:06:39.349120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-14 03:06:39.349140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-14 03:06:39.349152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-14 03:06:39.349176 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:06:39.349190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-14 
03:06:39.349202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-14 03:06:39.349214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-14 03:06:39.349240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': 
{'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-14 03:06:48.350285 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:06:48.350405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-14 03:06:48.350426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-14 03:06:48.350491 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:06:48.350503 | orchestrator | 2026-02-14 03:06:48.350516 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] 
********************** 2026-02-14 03:06:48.350529 | orchestrator | Saturday 14 February 2026 03:06:39 +0000 (0:00:00.585) 0:02:13.530 ***** 2026-02-14 03:06:48.350542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-14 03:06:48.350557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-14 03:06:48.350570 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:06:48.350582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-14 03:06:48.350594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-14 03:06:48.350605 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:06:48.350616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-14 03:06:48.350628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 
'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-14 03:06:48.350639 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:06:48.350650 | orchestrator | 2026-02-14 03:06:48.350661 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-02-14 03:06:48.350672 | orchestrator | Saturday 14 February 2026 03:06:40 +0000 (0:00:01.131) 0:02:14.662 ***** 2026-02-14 03:06:48.350683 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:06:48.350694 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:06:48.350737 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:06:48.350749 | orchestrator | 2026-02-14 03:06:48.350760 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-02-14 03:06:48.350772 | orchestrator | Saturday 14 February 2026 03:06:41 +0000 (0:00:01.267) 0:02:15.930 ***** 2026-02-14 03:06:48.350783 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:06:48.350793 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:06:48.350804 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:06:48.350815 | orchestrator | 2026-02-14 03:06:48.350826 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-02-14 03:06:48.350837 | orchestrator | Saturday 14 February 2026 03:06:43 +0000 (0:00:01.965) 0:02:17.895 ***** 2026-02-14 03:06:48.350848 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:06:48.350874 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:06:48.350886 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:06:48.350897 | orchestrator | 2026-02-14 03:06:48.350928 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-02-14 03:06:48.350940 | orchestrator | Saturday 14 February 2026 03:06:44 +0000 (0:00:00.309) 0:02:18.204 ***** 2026-02-14 03:06:48.350951 | orchestrator | included: magnum 
for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:06:48.350962 | orchestrator | 2026-02-14 03:06:48.350973 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-02-14 03:06:48.350984 | orchestrator | Saturday 14 February 2026 03:06:45 +0000 (0:00:01.162) 0:02:19.366 ***** 2026-02-14 03:06:48.350996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-14 03:06:48.351013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-14 03:06:48.351026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-14 03:06:48.351046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-14 03:06:48.351067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-14 03:06:53.494497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-14 03:06:53.494617 | orchestrator | 2026-02-14 03:06:53.494635 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-02-14 03:06:53.494648 | orchestrator | Saturday 14 February 2026 03:06:48 +0000 (0:00:03.160) 0:02:22.527 ***** 2026-02-14 03:06:53.494663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-14 03:06:53.494724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-14 03:06:53.494763 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:06:53.494781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-14 03:06:53.494815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-14 03:06:53.494827 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:06:53.494839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-14 03:06:53.494850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-14 03:06:53.494870 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:06:53.494881 | orchestrator | 2026-02-14 03:06:53.494892 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-02-14 03:06:53.494903 | orchestrator | Saturday 14 February 2026 03:06:48 +0000 (0:00:00.654) 0:02:23.181 ***** 2026-02-14 03:06:53.494915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-02-14 03:06:53.494928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-14 03:06:53.494941 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:06:53.494952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-02-14 03:06:53.494963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-14 03:06:53.494980 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:06:53.495000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-02-14 03:06:53.495017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-14 03:06:53.495036 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:06:53.495057 | orchestrator | 2026-02-14 03:06:53.495084 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-02-14 03:06:53.495103 | orchestrator | Saturday 14 February 2026 03:06:49 +0000 (0:00:00.902) 0:02:24.084 ***** 2026-02-14 03:06:53.495115 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:06:53.495126 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:06:53.495137 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:06:53.495148 | orchestrator | 2026-02-14 03:06:53.495159 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-02-14 03:06:53.495170 | orchestrator | Saturday 14 February 2026 03:06:51 +0000 (0:00:01.563) 0:02:25.647 ***** 
2026-02-14 03:06:53.495181 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:06:53.495192 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:06:53.495203 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:06:53.495213 | orchestrator | 2026-02-14 03:06:53.495224 | orchestrator | TASK [include_role : manila] *************************************************** 2026-02-14 03:06:53.495244 | orchestrator | Saturday 14 February 2026 03:06:53 +0000 (0:00:02.023) 0:02:27.671 ***** 2026-02-14 03:06:58.102508 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:06:58.102652 | orchestrator | 2026-02-14 03:06:58.102679 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-02-14 03:06:58.102696 | orchestrator | Saturday 14 February 2026 03:06:54 +0000 (0:00:01.044) 0:02:28.716 ***** 2026-02-14 03:06:58.102718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-14 03:06:58.102781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-14 03:06:58.102803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 03:06:58.102823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-14 03:06:58.102862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 03:06:58.102906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-14 03:06:58.102926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-14 03:06:58.102957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-14 03:06:58.103001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-14 03:06:58.103022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 03:06:58.103052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-14 03:06:58.103084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-14 03:06:59.035046 | orchestrator | 2026-02-14 03:06:59.035184 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-02-14 03:06:59.035204 | orchestrator | Saturday 14 February 2026 03:06:58 +0000 (0:00:03.654) 0:02:32.371 ***** 2026-02-14 03:06:59.035243 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-14 03:06:59.035259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 03:06:59.035273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-14 03:06:59.035285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-14 03:06:59.035297 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:06:59.035324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-14 03:06:59.035360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 03:06:59.035382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-14 03:06:59.035394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-14 03:06:59.035406 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:06:59.035427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 
'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-14 03:06:59.035479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 03:06:59.035508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-14 03:06:59.035542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-14 03:07:09.889297 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:07:09.889409 | orchestrator | 2026-02-14 03:07:09.889425 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-02-14 03:07:09.889436 | orchestrator | Saturday 14 February 2026 03:06:59 +0000 (0:00:00.937) 0:02:33.308 ***** 2026-02-14 03:07:09.889447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-14 03:07:09.889460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-14 03:07:09.889509 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:07:09.889530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-14 03:07:09.889547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-14 03:07:09.889562 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:07:09.889572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-14 03:07:09.889583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-14 03:07:09.889593 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:07:09.889603 | orchestrator | 2026-02-14 03:07:09.889613 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-02-14 03:07:09.889623 | orchestrator | Saturday 14 February 2026 03:06:59 +0000 (0:00:00.883) 0:02:34.191 ***** 2026-02-14 03:07:09.889633 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:07:09.889643 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:07:09.889653 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:07:09.889662 | orchestrator | 2026-02-14 03:07:09.889672 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-02-14 03:07:09.889682 | orchestrator | Saturday 14 February 2026 03:07:01 +0000 (0:00:01.268) 0:02:35.460 ***** 2026-02-14 03:07:09.889692 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:07:09.889702 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:07:09.889712 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:07:09.889722 | orchestrator | 2026-02-14 03:07:09.889731 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-02-14 03:07:09.889741 | orchestrator | Saturday 14 February 2026 03:07:03 +0000 (0:00:01.994) 
0:02:37.454 ***** 2026-02-14 03:07:09.889751 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:07:09.889761 | orchestrator | 2026-02-14 03:07:09.889771 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-02-14 03:07:09.889781 | orchestrator | Saturday 14 February 2026 03:07:04 +0000 (0:00:01.266) 0:02:38.720 ***** 2026-02-14 03:07:09.889791 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-14 03:07:09.889801 | orchestrator | 2026-02-14 03:07:09.889811 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-02-14 03:07:09.889848 | orchestrator | Saturday 14 February 2026 03:07:07 +0000 (0:00:03.106) 0:02:41.827 ***** 2026-02-14 03:07:09.889896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-14 03:07:09.889912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-14 03:07:09.889923 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:07:09.889938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-14 03:07:09.889957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-14 03:07:09.889968 | orchestrator | skipping: 
[testbed-node-1] 2026-02-14 03:07:09.889988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-14 03:07:12.310279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 
'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-14 03:07:12.310397 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:07:12.310423 | orchestrator | 2026-02-14 03:07:12.310444 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-02-14 03:07:12.310464 | orchestrator | Saturday 14 February 2026 03:07:09 +0000 (0:00:02.236) 0:02:44.063 ***** 2026-02-14 03:07:12.310598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-14 03:07:12.310616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-14 03:07:12.310629 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:07:12.310663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-14 03:07:12.310697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-14 03:07:12.310709 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:07:12.310721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-14 03:07:12.310741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-14 03:07:21.752121 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:07:21.752255 | orchestrator | 2026-02-14 03:07:21.752282 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-02-14 03:07:21.752305 | orchestrator | Saturday 14 February 2026 03:07:12 +0000 (0:00:02.424) 0:02:46.488 ***** 2026-02-14 03:07:21.752328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-14 03:07:21.752380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-14 03:07:21.752408 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:07:21.752420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-14 03:07:21.752432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-14 03:07:21.752444 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:07:21.752455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-14 03:07:21.752467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-14 03:07:21.752478 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:07:21.752522 | orchestrator | 2026-02-14 03:07:21.752536 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-02-14 03:07:21.752547 | orchestrator | Saturday 14 February 2026 03:07:15 +0000 (0:00:02.755) 0:02:49.244 ***** 2026-02-14 03:07:21.752558 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:07:21.752598 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:07:21.752610 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:07:21.752621 | orchestrator | 2026-02-14 03:07:21.752632 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-02-14 03:07:21.752644 | orchestrator | Saturday 14 February 2026 03:07:17 +0000 (0:00:01.995) 0:02:51.239 ***** 2026-02-14 03:07:21.752658 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:07:21.752670 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:07:21.752688 | 
orchestrator | skipping: [testbed-node-2] 2026-02-14 03:07:21.752707 | orchestrator | 2026-02-14 03:07:21.752727 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-02-14 03:07:21.752747 | orchestrator | Saturday 14 February 2026 03:07:18 +0000 (0:00:01.380) 0:02:52.619 ***** 2026-02-14 03:07:21.752767 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:07:21.752786 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:07:21.752804 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:07:21.752815 | orchestrator | 2026-02-14 03:07:21.752826 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-02-14 03:07:21.752837 | orchestrator | Saturday 14 February 2026 03:07:18 +0000 (0:00:00.307) 0:02:52.927 ***** 2026-02-14 03:07:21.752847 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:07:21.752859 | orchestrator | 2026-02-14 03:07:21.752871 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-02-14 03:07:21.752890 | orchestrator | Saturday 14 February 2026 03:07:20 +0000 (0:00:01.310) 0:02:54.238 ***** 2026-02-14 03:07:21.752919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'active_passive': True}}}}) 2026-02-14 03:07:21.752944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-14 03:07:21.752958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-14 03:07:21.752970 | orchestrator | 2026-02-14 03:07:21.752981 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-02-14 03:07:21.753009 | orchestrator | Saturday 14 February 2026 03:07:21 +0000 (0:00:01.507) 0:02:55.746 ***** 2026-02-14 03:07:21.753030 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-14 03:07:29.888926 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:07:29.889045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-14 03:07:29.889065 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:07:29.889079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-14 03:07:29.889092 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:07:29.889103 | orchestrator | 2026-02-14 03:07:29.889116 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-02-14 03:07:29.889129 | orchestrator | Saturday 14 February 2026 03:07:21 +0000 (0:00:00.375) 0:02:56.122 ***** 2026-02-14 03:07:29.889141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-14 03:07:29.889155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-14 03:07:29.889166 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:07:29.889178 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:07:29.889189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-14 03:07:29.889225 | orchestrator | skipping: 
[testbed-node-2] 2026-02-14 03:07:29.889237 | orchestrator | 2026-02-14 03:07:29.889289 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-02-14 03:07:29.889302 | orchestrator | Saturday 14 February 2026 03:07:22 +0000 (0:00:00.847) 0:02:56.969 ***** 2026-02-14 03:07:29.889313 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:07:29.889324 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:07:29.889335 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:07:29.889346 | orchestrator | 2026-02-14 03:07:29.889357 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-02-14 03:07:29.889368 | orchestrator | Saturday 14 February 2026 03:07:23 +0000 (0:00:00.435) 0:02:57.404 ***** 2026-02-14 03:07:29.889379 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:07:29.889390 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:07:29.889401 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:07:29.889412 | orchestrator | 2026-02-14 03:07:29.889423 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-02-14 03:07:29.889434 | orchestrator | Saturday 14 February 2026 03:07:24 +0000 (0:00:01.241) 0:02:58.646 ***** 2026-02-14 03:07:29.889445 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:07:29.889459 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:07:29.889472 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:07:29.889485 | orchestrator | 2026-02-14 03:07:29.889498 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-02-14 03:07:29.889543 | orchestrator | Saturday 14 February 2026 03:07:24 +0000 (0:00:00.318) 0:02:58.965 ***** 2026-02-14 03:07:29.889557 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:07:29.889569 | orchestrator | 2026-02-14 03:07:29.889582 | orchestrator 
| TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-02-14 03:07:29.889594 | orchestrator | Saturday 14 February 2026 03:07:26 +0000 (0:00:01.422) 0:03:00.387 ***** 2026-02-14 03:07:29.889626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-14 03:07:29.889647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:29.889662 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:29.889690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:29.889705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-14 03:07:29.889729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:29.972548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-14 03:07:29.972665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-14 03:07:29.972684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:29.972718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-14 03:07:29.972732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:29.972744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-14 03:07:29.972774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-14 03:07:29.972794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-14 03:07:29.972807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:29.972826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-14 
03:07:29.972839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-14 03:07:29.972862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:30.083974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': 
False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-14 03:07:30.084088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-14 03:07:30.084125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:30.084139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-14 03:07:30.084152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:30.084182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': 
{'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:30.084200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:30.084220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:30.084233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-14 03:07:30.084246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-14 03:07:30.084258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-14 
03:07:30.084280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:30.196322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:30.196437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-14 03:07:30.196454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-14 03:07:30.196468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-14 03:07:30.196482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:30.196494 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:30.196570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-14 03:07:30.196605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-14 03:07:30.196618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-14 03:07:30.196630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:30.196642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:30.196654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-14 03:07:30.196673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-14 03:07:31.328987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-14 03:07:31.329095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-14 03:07:31.329113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:31.329127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-02-14 03:07:31.329142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-02-14 03:07:31.329154 | orchestrator |
2026-02-14 03:07:31.329168 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2026-02-14 03:07:31.329201 | orchestrator | Saturday 14 February 2026 03:07:30 +0000 (0:00:04.087) 0:03:04.474 *****
2026-02-14 03:07:31.329240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-14 03:07:31.329255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:31.329269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:31.329281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:31.329292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-14 03:07:31.329324 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-14 03:07:31.411982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:31.412057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:31.412068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:31.412076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-14 03:07:31.412084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:31.412122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-14 03:07:31.412145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-14 03:07:31.412153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': 
True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:31.412159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:31.412167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-14 03:07:31.412178 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-14 03:07:31.412189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:31.412202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-14 03:07:31.474717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-14 03:07:31.474824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:31.474841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-14 03:07:31.474855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-14 03:07:31.474895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-14 03:07:31.474943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:31.474957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:31.474970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:31.474983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-14 03:07:31.475006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-14 03:07:31.475018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:31.475038 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-14 03:07:31.713264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-14 03:07:31.713357 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:07:31.713370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:31.713424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:31.713457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-14 03:07:31.713471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-14 03:07:31.713497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:31.713547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  
2026-02-14 03:07:31.713558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-14 03:07:31.713573 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:07:31.713580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-14 03:07:31.713588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:31.713599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-14 03:07:31.713612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:41.567046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-14 03:07:41.567167 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-14 03:07:41.567185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-14 03:07:41.567223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-14 03:07:41.567256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-14 03:07:41.567270 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:07:41.567284 | orchestrator | 2026-02-14 03:07:41.567296 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-02-14 03:07:41.567309 | orchestrator | Saturday 14 February 2026 03:07:31 +0000 (0:00:01.420) 0:03:05.895 ***** 2026-02-14 03:07:41.567322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-14 03:07:41.567335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-14 03:07:41.567347 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:07:41.567376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}})  2026-02-14 03:07:41.567388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-14 03:07:41.567399 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:07:41.567410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-14 03:07:41.567421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-14 03:07:41.567441 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:07:41.567452 | orchestrator | 2026-02-14 03:07:41.567463 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-02-14 03:07:41.567474 | orchestrator | Saturday 14 February 2026 03:07:33 +0000 (0:00:01.916) 0:03:07.811 ***** 2026-02-14 03:07:41.567486 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:07:41.567506 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:07:41.567554 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:07:41.567575 | orchestrator | 2026-02-14 03:07:41.567594 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-02-14 03:07:41.567613 | orchestrator | Saturday 14 February 2026 03:07:34 +0000 (0:00:01.294) 0:03:09.106 ***** 2026-02-14 03:07:41.567632 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:07:41.567650 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:07:41.567670 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:07:41.567688 | orchestrator | 2026-02-14 03:07:41.567708 | orchestrator | TASK [include_role : 
placement] ************************************************ 2026-02-14 03:07:41.567720 | orchestrator | Saturday 14 February 2026 03:07:36 +0000 (0:00:01.984) 0:03:11.091 ***** 2026-02-14 03:07:41.567732 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:07:41.567742 | orchestrator | 2026-02-14 03:07:41.567753 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-02-14 03:07:41.567764 | orchestrator | Saturday 14 February 2026 03:07:38 +0000 (0:00:01.225) 0:03:12.317 ***** 2026-02-14 03:07:41.567776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-14 03:07:41.567798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-14 03:07:41.567822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-14 03:07:51.950574 | orchestrator | 2026-02-14 03:07:51.950714 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-02-14 03:07:51.950732 | orchestrator | Saturday 14 February 2026 03:07:41 +0000 (0:00:03.423) 0:03:15.741 ***** 2026-02-14 03:07:51.950748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': 
True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-14 03:07:51.951446 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:07:51.951497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-14 03:07:51.951524 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:07:51.951585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-14 03:07:51.951600 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:07:51.951612 | orchestrator | 2026-02-14 03:07:51.951625 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-02-14 03:07:51.951637 | orchestrator | Saturday 14 February 2026 03:07:42 +0000 (0:00:00.486) 0:03:16.228 ***** 2026-02-14 03:07:51.951650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-14 03:07:51.951689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-14 03:07:51.951703 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:07:51.951714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-14 03:07:51.951777 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-14 03:07:51.951791 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:07:51.951802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-14 03:07:51.951813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-14 03:07:51.951825 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:07:51.951836 | orchestrator | 2026-02-14 03:07:51.951848 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-02-14 03:07:51.951859 | orchestrator | Saturday 14 February 2026 03:07:42 +0000 (0:00:00.767) 0:03:16.995 ***** 2026-02-14 03:07:51.951870 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:07:51.951881 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:07:51.951892 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:07:51.951903 | orchestrator | 2026-02-14 03:07:51.951914 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-02-14 03:07:51.951926 | orchestrator | Saturday 14 February 2026 03:07:44 +0000 (0:00:01.801) 0:03:18.797 ***** 2026-02-14 03:07:51.951937 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:07:51.951948 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:07:51.951959 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:07:51.951970 | orchestrator | 2026-02-14 03:07:51.951981 | orchestrator | TASK [include_role : nova] 
***************************************************** 2026-02-14 03:07:51.951993 | orchestrator | Saturday 14 February 2026 03:07:46 +0000 (0:00:01.793) 0:03:20.591 ***** 2026-02-14 03:07:51.952004 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:07:51.952015 | orchestrator | 2026-02-14 03:07:51.952026 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-02-14 03:07:51.952037 | orchestrator | Saturday 14 February 2026 03:07:47 +0000 (0:00:01.481) 0:03:22.072 ***** 2026-02-14 03:07:51.952054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-14 03:07:51.952085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 03:07:51.952098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-14 03:07:51.952120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-14 03:07:53.168030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 03:07:53.168135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-14 03:07:53.168214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-14 03:07:53.168243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 03:07:53.168262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-14 03:07:53.168282 | orchestrator | 2026-02-14 03:07:53.168295 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-02-14 03:07:53.168307 | orchestrator | Saturday 14 February 2026 03:07:51 +0000 (0:00:04.056) 0:03:26.128 ***** 2026-02-14 03:07:53.168341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-14 03:07:53.168364 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 03:07:53.168382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-14 03:07:53.168394 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:07:53.168407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-14 03:07:53.168426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 03:08:03.692331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-14 03:08:03.692470 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:08:03.692519 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-14 03:08:03.692601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 03:08:03.692623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': 
{'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-14 03:08:03.692640 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:08:03.692656 | orchestrator | 2026-02-14 03:08:03.692674 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-02-14 03:08:03.692692 | orchestrator | Saturday 14 February 2026 03:07:53 +0000 (0:00:01.215) 0:03:27.344 ***** 2026-02-14 03:08:03.692709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-14 03:08:03.692729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-14 03:08:03.692748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-14 03:08:03.692789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-14 03:08:03.692808 | orchestrator | skipping: [testbed-node-0] 2026-02-14 
03:08:03.692826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-14 03:08:03.692844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-14 03:08:03.692874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-14 03:08:03.692892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-14 03:08:03.692908 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:08:03.692925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-14 03:08:03.692943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-14 03:08:03.692969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-14 03:08:03.692986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-14 03:08:03.693003 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:08:03.693019 | orchestrator | 2026-02-14 03:08:03.693035 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-02-14 03:08:03.693052 | orchestrator | Saturday 14 February 2026 03:07:54 +0000 (0:00:00.896) 0:03:28.240 ***** 2026-02-14 03:08:03.693068 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:08:03.693084 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:08:03.693099 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:08:03.693116 | orchestrator | 2026-02-14 03:08:03.693133 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-02-14 03:08:03.693150 | orchestrator | Saturday 14 February 2026 03:07:55 +0000 (0:00:01.375) 0:03:29.616 ***** 2026-02-14 03:08:03.693167 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:08:03.693184 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:08:03.693200 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:08:03.693216 | orchestrator | 2026-02-14 03:08:03.693232 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-02-14 03:08:03.693249 | orchestrator | Saturday 14 February 2026 03:07:57 +0000 (0:00:02.089) 0:03:31.706 ***** 2026-02-14 03:08:03.693265 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:08:03.693281 | orchestrator | 2026-02-14 03:08:03.693298 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-02-14 03:08:03.693315 | orchestrator | Saturday 14 February 2026 03:07:59 +0000 (0:00:01.539) 0:03:33.245 ***** 2026-02-14 03:08:03.693331 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 => (item=nova-novncproxy) 2026-02-14 03:08:03.693349 | orchestrator | 2026-02-14 03:08:03.693366 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-02-14 03:08:03.693382 | orchestrator | Saturday 14 February 2026 03:07:59 +0000 (0:00:00.820) 0:03:34.066 ***** 2026-02-14 03:08:03.693401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-14 03:08:03.693450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-14 03:08:15.152643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-14 03:08:15.152763 | orchestrator | 
2026-02-14 03:08:15.152782 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-02-14 03:08:15.152796 | orchestrator | Saturday 14 February 2026 03:08:03 +0000 (0:00:03.804) 0:03:37.870 ***** 2026-02-14 03:08:15.152810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-14 03:08:15.152822 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:08:15.152851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-14 03:08:15.152863 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:08:15.152874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-14 03:08:15.152885 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:08:15.152896 | orchestrator | 2026-02-14 03:08:15.152908 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-02-14 03:08:15.152920 | orchestrator | Saturday 14 February 2026 03:08:05 +0000 (0:00:01.353) 0:03:39.224 ***** 2026-02-14 03:08:15.152932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-14 03:08:15.152947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-14 03:08:15.152983 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:08:15.152995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-14 03:08:15.153006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-14 03:08:15.153017 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:08:15.153028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-14 03:08:15.153040 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-14 03:08:15.153067 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:08:15.153079 | orchestrator | 2026-02-14 03:08:15.153090 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-14 03:08:15.153101 | orchestrator | Saturday 14 February 2026 03:08:06 +0000 (0:00:01.473) 0:03:40.698 ***** 2026-02-14 03:08:15.153112 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:08:15.153123 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:08:15.153136 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:08:15.153148 | orchestrator | 2026-02-14 03:08:15.153160 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-14 03:08:15.153173 | orchestrator | Saturday 14 February 2026 03:08:08 +0000 (0:00:02.427) 0:03:43.125 ***** 2026-02-14 03:08:15.153185 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:08:15.153198 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:08:15.153210 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:08:15.153222 | orchestrator | 2026-02-14 03:08:15.153234 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-02-14 03:08:15.153246 | orchestrator | Saturday 14 February 2026 03:08:11 +0000 (0:00:02.888) 0:03:46.014 ***** 2026-02-14 03:08:15.153260 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-02-14 03:08:15.153273 | orchestrator | 2026-02-14 03:08:15.153285 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-02-14 03:08:15.153298 | orchestrator | 
Saturday 14 February 2026 03:08:12 +0000 (0:00:01.056) 0:03:47.070 ***** 2026-02-14 03:08:15.153317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-14 03:08:15.153329 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:08:15.153341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-14 03:08:15.153362 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:08:15.153373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-14 03:08:15.153385 | orchestrator | skipping: [testbed-node-2] 2026-02-14 
03:08:15.153395 | orchestrator | 2026-02-14 03:08:15.153406 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-02-14 03:08:15.153417 | orchestrator | Saturday 14 February 2026 03:08:13 +0000 (0:00:01.021) 0:03:48.091 ***** 2026-02-14 03:08:15.153428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-14 03:08:15.153439 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:08:15.153450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-14 03:08:15.153468 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:08:37.414173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-14 03:08:37.414289 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:08:37.414308 | orchestrator | 2026-02-14 03:08:37.414321 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-02-14 03:08:37.414334 | orchestrator | Saturday 14 February 2026 03:08:15 +0000 (0:00:01.234) 0:03:49.326 ***** 2026-02-14 03:08:37.414347 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:08:37.414358 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:08:37.414369 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:08:37.414380 | orchestrator | 2026-02-14 03:08:37.414391 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-14 03:08:37.414403 | orchestrator | Saturday 14 February 2026 03:08:16 +0000 (0:00:01.472) 0:03:50.798 ***** 2026-02-14 03:08:37.414414 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:08:37.414426 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:08:37.414437 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:08:37.414448 | orchestrator | 2026-02-14 03:08:37.414459 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-14 03:08:37.414471 | orchestrator | Saturday 14 February 2026 03:08:19 +0000 (0:00:02.580) 0:03:53.379 ***** 2026-02-14 03:08:37.414506 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:08:37.414518 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:08:37.414529 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:08:37.414540 | orchestrator | 2026-02-14 03:08:37.414565 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-02-14 03:08:37.414576 | orchestrator | Saturday 14 February 2026 03:08:21 +0000 (0:00:02.587) 0:03:55.967 ***** 2026-02-14 03:08:37.414588 | 
orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-02-14 03:08:37.414600 | orchestrator | 2026-02-14 03:08:37.414611 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-02-14 03:08:37.414659 | orchestrator | Saturday 14 February 2026 03:08:22 +0000 (0:00:01.160) 0:03:57.127 ***** 2026-02-14 03:08:37.414675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-14 03:08:37.414689 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:08:37.414703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-14 03:08:37.414717 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:08:37.414729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': 
'6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-14 03:08:37.414743 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:08:37.414755 | orchestrator | 2026-02-14 03:08:37.414768 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-02-14 03:08:37.414783 | orchestrator | Saturday 14 February 2026 03:08:24 +0000 (0:00:01.218) 0:03:58.346 ***** 2026-02-14 03:08:37.414814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-14 03:08:37.414828 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:08:37.414839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-14 03:08:37.414872 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:08:37.414884 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-14 03:08:37.414895 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:08:37.414907 | orchestrator | 2026-02-14 03:08:37.414923 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-02-14 03:08:37.414934 | orchestrator | Saturday 14 February 2026 03:08:25 +0000 (0:00:01.298) 0:03:59.644 ***** 2026-02-14 03:08:37.414945 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:08:37.414956 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:08:37.414967 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:08:37.414978 | orchestrator | 2026-02-14 03:08:37.414989 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-14 03:08:37.415000 | orchestrator | Saturday 14 February 2026 03:08:27 +0000 (0:00:01.803) 0:04:01.447 ***** 2026-02-14 03:08:37.415012 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:08:37.415023 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:08:37.415034 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:08:37.415045 | orchestrator | 2026-02-14 03:08:37.415055 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-14 03:08:37.415066 | orchestrator | Saturday 14 February 2026 03:08:29 +0000 (0:00:02.296) 0:04:03.744 ***** 2026-02-14 03:08:37.415078 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:08:37.415088 | orchestrator | ok: 
[testbed-node-1] 2026-02-14 03:08:37.415099 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:08:37.415114 | orchestrator | 2026-02-14 03:08:37.415132 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-02-14 03:08:37.415151 | orchestrator | Saturday 14 February 2026 03:08:32 +0000 (0:00:03.151) 0:04:06.895 ***** 2026-02-14 03:08:37.415168 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:08:37.415187 | orchestrator | 2026-02-14 03:08:37.415204 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-02-14 03:08:37.415222 | orchestrator | Saturday 14 February 2026 03:08:34 +0000 (0:00:01.557) 0:04:08.453 ***** 2026-02-14 03:08:37.415242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-14 03:08:37.415263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-14 03:08:37.415309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-14 03:08:38.104771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-14 03:08:38.104912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-14 03:08:38.104932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-14 03:08:38.104946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-14 03:08:38.104960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-14 03:08:38.104993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-14 03:08:38.105022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-14 03:08:38.105036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 
'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-14 03:08:38.105047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-14 03:08:38.105059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-14 03:08:38.105070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-14 03:08:38.105124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-14 03:08:38.105138 | orchestrator | 2026-02-14 03:08:38.105155 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-02-14 03:08:38.105174 | orchestrator | Saturday 14 February 2026 03:08:37 +0000 (0:00:03.276) 0:04:11.730 ***** 2026-02-14 03:08:38.105209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-14 03:08:38.246386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-14 03:08:38.246487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-14 03:08:38.246503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-14 03:08:38.246517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-14 03:08:38.246550 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:08:38.246564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-14 03:08:38.246577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-14 03:08:38.246669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-14 03:08:38.246685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-14 03:08:38.246697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-14 03:08:38.246716 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:08:38.246727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-14 03:08:38.246739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-14 03:08:38.246751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-14 03:08:38.246776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-14 03:08:49.434998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-14 03:08:49.435096 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:08:49.435109 | orchestrator | 2026-02-14 03:08:49.435119 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-02-14 03:08:49.435128 | orchestrator | Saturday 14 February 2026 03:08:38 +0000 (0:00:00.699) 0:04:12.429 ***** 2026-02-14 03:08:49.435137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-14 03:08:49.435166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-14 03:08:49.435177 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:08:49.435185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-14 03:08:49.435193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-14 03:08:49.435200 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:08:49.435208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-14 03:08:49.435216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-14 03:08:49.435223 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:08:49.435231 | orchestrator | 2026-02-14 03:08:49.435239 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-02-14 03:08:49.435246 | orchestrator | Saturday 14 February 2026 03:08:39 +0000 (0:00:00.887) 0:04:13.317 ***** 2026-02-14 03:08:49.435254 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:08:49.435262 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:08:49.435269 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:08:49.435277 | orchestrator | 2026-02-14 03:08:49.435285 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-02-14 03:08:49.435292 | orchestrator | Saturday 14 February 2026 03:08:40 +0000 (0:00:01.698) 0:04:15.016 ***** 2026-02-14 03:08:49.435300 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:08:49.435307 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:08:49.435316 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:08:49.435323 | orchestrator | 2026-02-14 03:08:49.435331 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-02-14 03:08:49.435338 | orchestrator | Saturday 14 February 2026 03:08:42 +0000 (0:00:02.043) 0:04:17.059 ***** 2026-02-14 03:08:49.435346 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:08:49.435355 | orchestrator | 2026-02-14 03:08:49.435362 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] 
***************** 2026-02-14 03:08:49.435370 | orchestrator | Saturday 14 February 2026 03:08:44 +0000 (0:00:01.381) 0:04:18.440 ***** 2026-02-14 03:08:49.435392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-14 03:08:49.435420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-14 03:08:49.435435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-14 03:08:49.435445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-14 03:08:49.435458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-14 03:08:49.435474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-14 03:08:51.364207 | orchestrator | 2026-02-14 03:08:51.364280 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-02-14 03:08:51.364289 | orchestrator | Saturday 14 February 2026 03:08:49 +0000 (0:00:05.161) 0:04:23.602 ***** 2026-02-14 03:08:51.364298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-14 03:08:51.364308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-14 03:08:51.364316 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:08:51.364334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-14 03:08:51.364341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-14 03:08:51.364376 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:08:51.364383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-14 03:08:51.364389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-14 03:08:51.364395 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:08:51.364401 | orchestrator | 2026-02-14 03:08:51.364407 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-02-14 03:08:51.364413 | orchestrator | Saturday 14 February 2026 03:08:50 +0000 (0:00:01.027) 0:04:24.629 ***** 2026-02-14 03:08:51.364419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-14 03:08:51.364427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-14 03:08:51.364435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-14 03:08:51.364447 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:08:51.364456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}})  2026-02-14 03:08:51.364462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-14 03:08:51.364468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-14 03:08:51.364473 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:08:51.364479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-14 03:08:51.364484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-14 03:08:51.364498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-14 03:08:57.329806 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:08:57.329972 | orchestrator | 2026-02-14 03:08:57.330000 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-02-14 03:08:57.330082 | orchestrator | Saturday 14 February 2026 03:08:51 +0000 (0:00:00.918) 0:04:25.548 ***** 2026-02-14 03:08:57.330104 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:08:57.330123 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:08:57.330143 | orchestrator | 
skipping: [testbed-node-2] 2026-02-14 03:08:57.330162 | orchestrator | 2026-02-14 03:08:57.330182 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-02-14 03:08:57.330202 | orchestrator | Saturday 14 February 2026 03:08:51 +0000 (0:00:00.423) 0:04:25.972 ***** 2026-02-14 03:08:57.330221 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:08:57.330240 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:08:57.330260 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:08:57.330279 | orchestrator | 2026-02-14 03:08:57.330298 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-02-14 03:08:57.330319 | orchestrator | Saturday 14 February 2026 03:08:53 +0000 (0:00:01.393) 0:04:27.365 ***** 2026-02-14 03:08:57.330339 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:08:57.330359 | orchestrator | 2026-02-14 03:08:57.330377 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-02-14 03:08:57.330395 | orchestrator | Saturday 14 February 2026 03:08:54 +0000 (0:00:01.728) 0:04:29.093 ***** 2026-02-14 03:08:57.330421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 
'listen_port': '9091', 'active_passive': True}}}}) 2026-02-14 03:08:57.330479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-14 03:08:57.330521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:08:57.330543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:08:57.330564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-14 03:08:57.330610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-14 03:08:57.330633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-14 03:08:57.330654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:08:57.330715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:08:57.330735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-14 03:08:57.330763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-14 03:08:57.330784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-14 03:08:57.330815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:08:58.885879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:08:58.885983 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-14 03:08:58.886084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-14 03:08:58.886120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-14 03:08:58.886134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:08:58.886146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:08:58.886178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-14 03:08:58.886191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-14 03:08:58.886212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 
'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-14 03:08:58.886229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:08:58.886241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:08:58.886253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-14 03:08:58.886273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 
'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-14 03:08:59.588888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-14 03:08:59.588976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:08:59.589004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:08:59.589014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-14 03:08:59.589023 | orchestrator | 2026-02-14 03:08:59.589033 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-02-14 03:08:59.589042 | orchestrator | Saturday 14 February 2026 03:08:59 +0000 (0:00:04.126) 0:04:33.220 ***** 2026-02-14 03:08:59.589052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-14 03:08:59.589061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-14 03:08:59.589114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:08:59.589124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:08:59.589134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-14 03:08:59.589150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-14 03:08:59.589160 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-14 03:08:59.589169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:08:59.589190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:08:59.703958 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-14 03:08:59.704081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-14 03:08:59.704125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-14 03:08:59.704180 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:08:59.704197 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:08:59.704210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:08:59.704224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-14 03:08:59.704292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-14 03:08:59.704316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-14 03:08:59.704343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-14 03:08:59.704363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-14 03:08:59.704381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:08:59.704409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:08:59.704438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:09:01.282580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:09:01.282731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-14 03:09:01.282768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-14 03:09:01.282782 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:09:01.282800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-14 03:09:01.282814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-14 03:09:01.282851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:09:01.282883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 03:09:01.282895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-14 03:09:01.282907 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:09:01.282919 | orchestrator | 2026-02-14 03:09:01.282931 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-02-14 03:09:01.282944 | orchestrator | Saturday 14 February 2026 03:08:59 +0000 (0:00:00.812) 0:04:34.033 ***** 2026-02-14 03:09:01.282962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-14 03:09:01.282977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-14 03:09:01.282991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-14 03:09:01.283006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-14 03:09:01.283019 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:09:01.283031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True}})  2026-02-14 03:09:01.283053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-14 03:09:01.283068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-14 03:09:01.283081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-14 03:09:01.283094 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:09:01.283108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-14 03:09:01.283121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-14 03:09:01.283134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-14 03:09:01.283155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-14 03:09:08.792817 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:09:08.792933 | orchestrator | 2026-02-14 03:09:08.792950 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-02-14 03:09:08.792964 | orchestrator | Saturday 14 February 2026 03:09:01 +0000 (0:00:01.419) 0:04:35.453 ***** 2026-02-14 03:09:08.792975 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:09:08.792987 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:09:08.792998 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:09:08.793009 | orchestrator | 2026-02-14 03:09:08.793020 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-02-14 03:09:08.793036 | orchestrator | Saturday 14 February 2026 03:09:01 +0000 (0:00:00.437) 0:04:35.890 ***** 2026-02-14 03:09:08.793056 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:09:08.793075 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:09:08.793094 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:09:08.793113 | orchestrator | 2026-02-14 03:09:08.793132 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-02-14 03:09:08.793173 | orchestrator | Saturday 14 February 2026 03:09:03 +0000 (0:00:01.358) 0:04:37.249 ***** 2026-02-14 03:09:08.793191 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:09:08.793209 | orchestrator | 2026-02-14 03:09:08.793221 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-02-14 03:09:08.793232 | orchestrator | Saturday 14 February 2026 03:09:04 +0000 (0:00:01.716) 0:04:38.966 ***** 
2026-02-14 03:09:08.793247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-14 03:09:08.793295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-14 03:09:08.793351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-14 03:09:08.793367 | orchestrator | 2026-02-14 03:09:08.793381 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-02-14 03:09:08.793415 | orchestrator | Saturday 14 February 2026 03:09:06 +0000 (0:00:02.123) 0:04:41.089 ***** 2026-02-14 03:09:08.793431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-14 03:09:08.793461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-14 03:09:08.793477 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:09:08.793489 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:09:08.793503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-14 03:09:08.793517 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:09:08.793531 | orchestrator | 2026-02-14 03:09:08.793544 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-02-14 03:09:08.793555 | orchestrator | Saturday 14 February 2026 03:09:07 +0000 (0:00:00.450) 0:04:41.540 ***** 2026-02-14 03:09:08.793568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-14 03:09:08.793580 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:09:08.793591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-14 03:09:08.793602 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:09:08.793613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-14 03:09:08.793624 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:09:08.793634 | orchestrator | 2026-02-14 03:09:08.793645 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-02-14 03:09:08.793657 | orchestrator | Saturday 14 
February 2026 03:09:08 +0000 (0:00:00.968) 0:04:42.508 ***** 2026-02-14 03:09:08.793674 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:09:18.578405 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:09:18.578515 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:09:18.578532 | orchestrator | 2026-02-14 03:09:18.578546 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-02-14 03:09:18.578559 | orchestrator | Saturday 14 February 2026 03:09:08 +0000 (0:00:00.467) 0:04:42.976 ***** 2026-02-14 03:09:18.578570 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:09:18.578608 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:09:18.578620 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:09:18.578631 | orchestrator | 2026-02-14 03:09:18.578642 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-02-14 03:09:18.578653 | orchestrator | Saturday 14 February 2026 03:09:10 +0000 (0:00:01.418) 0:04:44.394 ***** 2026-02-14 03:09:18.578665 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:09:18.578676 | orchestrator | 2026-02-14 03:09:18.578688 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-02-14 03:09:18.578726 | orchestrator | Saturday 14 February 2026 03:09:11 +0000 (0:00:01.490) 0:04:45.884 ***** 2026-02-14 03:09:18.578757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-14 03:09:18.578777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-14 03:09:18.578789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-14 03:09:18.578819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-14 03:09:18.578848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 
'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-14 03:09:18.578861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-14 03:09:18.578873 | orchestrator | 2026-02-14 03:09:18.578884 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-02-14 03:09:18.578896 | orchestrator | Saturday 14 February 2026 03:09:17 +0000 (0:00:06.235) 0:04:52.119 ***** 2026-02-14 03:09:18.578908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-14 03:09:18.578927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-14 03:09:24.340897 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:09:24.340992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-14 03:09:24.341003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-14 03:09:24.341011 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:09:24.341016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-14 03:09:24.341022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-14 03:09:24.341043 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:09:24.341049 | orchestrator | 2026-02-14 03:09:24.341055 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-02-14 
03:09:24.341062 | orchestrator | Saturday 14 February 2026 03:09:18 +0000 (0:00:00.641) 0:04:52.760 ***** 2026-02-14 03:09:24.341079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-14 03:09:24.341087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-14 03:09:24.341094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-14 03:09:24.341102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-14 03:09:24.341108 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:09:24.341113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-14 03:09:24.341118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-14 03:09:24.341124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-14 03:09:24.341129 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-14 03:09:24.341134 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:09:24.341139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-14 03:09:24.341145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-14 03:09:24.341150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-14 03:09:24.341155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-14 03:09:24.341160 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:09:24.341165 | orchestrator | 2026-02-14 03:09:24.341175 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-02-14 03:09:24.341180 | orchestrator | Saturday 14 February 2026 03:09:19 +0000 (0:00:00.921) 0:04:53.682 ***** 2026-02-14 03:09:24.341185 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:09:24.341190 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:09:24.341195 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:09:24.341201 | orchestrator | 2026-02-14 03:09:24.341206 
| orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-02-14 03:09:24.341211 | orchestrator | Saturday 14 February 2026 03:09:20 +0000 (0:00:01.341) 0:04:55.024 ***** 2026-02-14 03:09:24.341216 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:09:24.341221 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:09:24.341226 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:09:24.341231 | orchestrator | 2026-02-14 03:09:24.341237 | orchestrator | TASK [include_role : swift] **************************************************** 2026-02-14 03:09:24.341242 | orchestrator | Saturday 14 February 2026 03:09:23 +0000 (0:00:02.223) 0:04:57.247 ***** 2026-02-14 03:09:24.341247 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:09:24.341252 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:09:24.341257 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:09:24.341263 | orchestrator | 2026-02-14 03:09:24.341268 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-02-14 03:09:24.341273 | orchestrator | Saturday 14 February 2026 03:09:23 +0000 (0:00:00.642) 0:04:57.890 ***** 2026-02-14 03:09:24.341278 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:09:24.341283 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:09:24.341289 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:09:24.341294 | orchestrator | 2026-02-14 03:09:24.341299 | orchestrator | TASK [include_role : trove] **************************************************** 2026-02-14 03:09:24.341304 | orchestrator | Saturday 14 February 2026 03:09:23 +0000 (0:00:00.305) 0:04:58.195 ***** 2026-02-14 03:09:24.341309 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:09:24.341317 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:10:08.024542 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:10:08.024663 | orchestrator | 2026-02-14 03:10:08.024680 | 
orchestrator | TASK [include_role : venus] **************************************************** 2026-02-14 03:10:08.024693 | orchestrator | Saturday 14 February 2026 03:09:24 +0000 (0:00:00.331) 0:04:58.526 ***** 2026-02-14 03:10:08.024704 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:10:08.024715 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:10:08.024726 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:10:08.024737 | orchestrator | 2026-02-14 03:10:08.024748 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-02-14 03:10:08.024759 | orchestrator | Saturday 14 February 2026 03:09:24 +0000 (0:00:00.319) 0:04:58.846 ***** 2026-02-14 03:10:08.024770 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:10:08.024781 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:10:08.024818 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:10:08.024829 | orchestrator | 2026-02-14 03:10:08.024840 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-02-14 03:10:08.024869 | orchestrator | Saturday 14 February 2026 03:09:25 +0000 (0:00:00.620) 0:04:59.466 ***** 2026-02-14 03:10:08.024881 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:10:08.024892 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:10:08.024903 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:10:08.024914 | orchestrator | 2026-02-14 03:10:08.024925 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-02-14 03:10:08.024936 | orchestrator | Saturday 14 February 2026 03:09:25 +0000 (0:00:00.566) 0:05:00.033 ***** 2026-02-14 03:10:08.024947 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:10:08.024959 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:10:08.024970 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:10:08.024980 | orchestrator | 2026-02-14 03:10:08.024991 | orchestrator | 
RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-02-14 03:10:08.025024 | orchestrator | Saturday 14 February 2026 03:09:26 +0000 (0:00:00.643) 0:05:00.676 ***** 2026-02-14 03:10:08.025036 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:10:08.025047 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:10:08.025060 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:10:08.025073 | orchestrator | 2026-02-14 03:10:08.025085 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-02-14 03:10:08.025098 | orchestrator | Saturday 14 February 2026 03:09:27 +0000 (0:00:00.695) 0:05:01.372 ***** 2026-02-14 03:10:08.025111 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:10:08.025124 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:10:08.025136 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:10:08.025149 | orchestrator | 2026-02-14 03:10:08.025162 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-02-14 03:10:08.025174 | orchestrator | Saturday 14 February 2026 03:09:28 +0000 (0:00:00.841) 0:05:02.213 ***** 2026-02-14 03:10:08.025187 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:10:08.025200 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:10:08.025212 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:10:08.025225 | orchestrator | 2026-02-14 03:10:08.025238 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-02-14 03:10:08.025251 | orchestrator | Saturday 14 February 2026 03:09:28 +0000 (0:00:00.834) 0:05:03.048 ***** 2026-02-14 03:10:08.025264 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:10:08.025278 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:10:08.025291 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:10:08.025303 | orchestrator | 2026-02-14 03:10:08.025316 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] 
****************
2026-02-14 03:10:08.025329 | orchestrator | Saturday 14 February 2026 03:09:29 +0000 (0:00:00.839) 0:05:03.888 *****
2026-02-14 03:10:08.025341 | orchestrator | changed: [testbed-node-1]
2026-02-14 03:10:08.025355 | orchestrator | changed: [testbed-node-2]
2026-02-14 03:10:08.025367 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:10:08.025379 | orchestrator |
2026-02-14 03:10:08.025393 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-02-14 03:10:08.025406 | orchestrator | Saturday 14 February 2026 03:09:37 +0000 (0:00:08.163) 0:05:12.052 *****
2026-02-14 03:10:08.025418 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:10:08.025428 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:10:08.025439 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:10:08.025450 | orchestrator |
2026-02-14 03:10:08.025460 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-02-14 03:10:08.025471 | orchestrator | Saturday 14 February 2026 03:09:39 +0000 (0:00:01.221) 0:05:13.273 *****
2026-02-14 03:10:08.025482 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:10:08.025493 | orchestrator | changed: [testbed-node-1]
2026-02-14 03:10:08.025504 | orchestrator | changed: [testbed-node-2]
2026-02-14 03:10:08.025515 | orchestrator |
2026-02-14 03:10:08.025526 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-02-14 03:10:08.025537 | orchestrator | Saturday 14 February 2026 03:09:50 +0000 (0:00:10.951) 0:05:24.225 *****
2026-02-14 03:10:08.025548 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:10:08.025559 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:10:08.025570 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:10:08.025580 | orchestrator |
2026-02-14 03:10:08.025591 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-02-14 03:10:08.025602 | orchestrator | Saturday 14 February 2026 03:09:54 +0000 (0:00:04.733) 0:05:28.958 *****
2026-02-14 03:10:08.025613 | orchestrator | changed: [testbed-node-2]
2026-02-14 03:10:08.025624 | orchestrator | changed: [testbed-node-1]
2026-02-14 03:10:08.025634 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:10:08.025645 | orchestrator |
2026-02-14 03:10:08.025656 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-02-14 03:10:08.025666 | orchestrator | Saturday 14 February 2026 03:10:02 +0000 (0:00:07.919) 0:05:36.878 *****
2026-02-14 03:10:08.025689 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:10:08.025701 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:10:08.025711 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:10:08.025722 | orchestrator |
2026-02-14 03:10:08.025733 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-02-14 03:10:08.025744 | orchestrator | Saturday 14 February 2026 03:10:03 +0000 (0:00:00.694) 0:05:37.572 *****
2026-02-14 03:10:08.025755 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:10:08.025765 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:10:08.025776 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:10:08.025787 | orchestrator |
2026-02-14 03:10:08.025855 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-02-14 03:10:08.025867 | orchestrator | Saturday 14 February 2026 03:10:03 +0000 (0:00:00.354) 0:05:37.926 *****
2026-02-14 03:10:08.025878 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:10:08.025889 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:10:08.025900 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:10:08.025911 | orchestrator |
2026-02-14 03:10:08.025922 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-02-14 03:10:08.025933 | orchestrator | Saturday 14 February 2026 03:10:04 +0000 (0:00:00.381) 0:05:38.308 *****
2026-02-14 03:10:08.025944 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:10:08.025955 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:10:08.025966 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:10:08.025978 | orchestrator |
2026-02-14 03:10:08.025988 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-02-14 03:10:08.025999 | orchestrator | Saturday 14 February 2026 03:10:04 +0000 (0:00:00.346) 0:05:38.655 *****
2026-02-14 03:10:08.026010 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:10:08.026095 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:10:08.026108 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:10:08.026119 | orchestrator |
2026-02-14 03:10:08.026129 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-02-14 03:10:08.026140 | orchestrator | Saturday 14 February 2026 03:10:05 +0000 (0:00:00.680) 0:05:39.336 *****
2026-02-14 03:10:08.026151 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:10:08.026162 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:10:08.026173 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:10:08.026184 | orchestrator |
2026-02-14 03:10:08.026194 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-02-14 03:10:08.026205 | orchestrator | Saturday 14 February 2026 03:10:05 +0000 (0:00:00.359) 0:05:39.695 *****
2026-02-14 03:10:08.026216 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:10:08.026227 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:10:08.026238 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:10:08.026249 | orchestrator |
2026-02-14 03:10:08.026260 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-02-14 03:10:08.026271 | orchestrator | Saturday 14 February 2026 03:10:06 +0000 (0:00:00.876) 0:05:40.572 *****
2026-02-14 03:10:08.026282 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:10:08.026293 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:10:08.026304 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:10:08.026314 | orchestrator |
2026-02-14 03:10:08.026325 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 03:10:08.026337 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-14 03:10:08.026350 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-14 03:10:08.026361 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-14 03:10:08.026372 | orchestrator |
2026-02-14 03:10:08.026391 | orchestrator |
2026-02-14 03:10:08.026402 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 03:10:08.026413 | orchestrator | Saturday 14 February 2026 03:10:07 +0000 (0:00:00.819) 0:05:41.392 *****
2026-02-14 03:10:08.026424 | orchestrator | ===============================================================================
2026-02-14 03:10:08.026435 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 10.95s
2026-02-14 03:10:08.026446 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.16s
2026-02-14 03:10:08.026457 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 7.92s
2026-02-14 03:10:08.026467 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.24s
2026-02-14 03:10:08.026478 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.16s
2026-02-14 03:10:08.026489 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.73s
2026-02-14 03:10:08.026500 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.13s
2026-02-14 03:10:08.026511 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.09s
2026-02-14 03:10:08.026522 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.06s
2026-02-14 03:10:08.026532 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.80s
2026-02-14 03:10:08.026543 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 3.80s
2026-02-14 03:10:08.026554 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.65s
2026-02-14 03:10:08.026565 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 3.59s
2026-02-14 03:10:08.026575 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.44s
2026-02-14 03:10:08.026586 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.42s
2026-02-14 03:10:08.026597 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.31s
2026-02-14 03:10:08.026608 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.29s
2026-02-14 03:10:08.026619 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 3.29s
2026-02-14 03:10:08.026630 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.28s
2026-02-14 03:10:08.026641 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.23s
2026-02-14 03:10:10.360661 | orchestrator | 2026-02-14 03:10:10 | INFO  | Task 70e448c5-d389-4e3d-a9e1-8dd26b3fec4e (opensearch) was prepared for execution.
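The "Wait for haproxy to listen on VIP" and "Wait for proxysql to listen on VIP" handlers above gate the rolling loadbalancer restart on the services actually accepting TCP connections on the virtual IP. A minimal sketch of such a readiness poll in Python (the VIP address and port in the usage line are hypothetical placeholders, not values taken from this job; Ansible's own `wait_for` module does the equivalent):

```python
import socket
import time


def wait_for_listen(host: str, port: int, timeout: float = 30.0,
                    interval: float = 1.0) -> bool:
    """Poll until a TCP listener on host:port accepts a connection.

    Returns True as soon as a connect succeeds, False once the
    overall timeout elapses without a successful connection.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A successful three-way handshake means the service is listening.
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            # Connection refused / unreachable: back off and retry.
            time.sleep(interval)
    return False


# Hypothetical usage against an internal VIP (address/port are illustrative):
# wait_for_listen("192.168.16.254", 9200)
```

In the playbook this check runs after the backup containers are restarted and before the master-side handlers, so traffic only fails over to instances that are already serving.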
2026-02-14 03:10:10.360737 | orchestrator | 2026-02-14 03:10:10 | INFO  | It takes a moment until task 70e448c5-d389-4e3d-a9e1-8dd26b3fec4e (opensearch) has been started and output is visible here.
2026-02-14 03:10:21.042411 | orchestrator |
2026-02-14 03:10:21.042544 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-14 03:10:21.042560 | orchestrator |
2026-02-14 03:10:21.042571 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-14 03:10:21.042581 | orchestrator | Saturday 14 February 2026 03:10:14 +0000 (0:00:00.263) 0:00:00.263 *****
2026-02-14 03:10:21.042592 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:10:21.042603 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:10:21.042613 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:10:21.042623 | orchestrator |
2026-02-14 03:10:21.042633 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-14 03:10:21.042643 | orchestrator | Saturday 14 February 2026 03:10:14 +0000 (0:00:00.303) 0:00:00.567 *****
2026-02-14 03:10:21.042670 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-02-14 03:10:21.042683 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-02-14 03:10:21.042700 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-02-14 03:10:21.042715 | orchestrator |
2026-02-14 03:10:21.042731 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-02-14 03:10:21.042775 | orchestrator |
2026-02-14 03:10:21.042795 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-02-14 03:10:21.042811 | orchestrator | Saturday 14 February 2026 03:10:15 +0000 (0:00:00.430) 0:00:00.997 *****
2026-02-14 03:10:21.042860 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0,
testbed-node-1, testbed-node-2
2026-02-14 03:10:21.042871 | orchestrator |
2026-02-14 03:10:21.042881 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2026-02-14 03:10:21.042891 | orchestrator | Saturday 14 February 2026 03:10:15 +0000 (0:00:00.505) 0:00:01.503 *****
2026-02-14 03:10:21.042900 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-14 03:10:21.042910 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-14 03:10:21.042923 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-14 03:10:21.042935 | orchestrator |
2026-02-14 03:10:21.042946 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2026-02-14 03:10:21.042957 | orchestrator | Saturday 14 February 2026 03:10:16 +0000 (0:00:00.678) 0:00:02.182 *****
2026-02-14 03:10:21.042972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-14 03:10:21.042990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-14 03:10:21.043022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-14 03:10:21.043044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-14 03:10:21.043067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-14 03:10:21.043081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-14 03:10:21.043092 | orchestrator |
2026-02-14 03:10:21.043103 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-02-14 03:10:21.043114 | orchestrator | Saturday 14 February 2026 03:10:18 +0000 (0:00:01.617) 0:00:03.800 *****
2026-02-14 03:10:21.043126 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 03:10:21.043137 | orchestrator |
2026-02-14 03:10:21.043148 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
2026-02-14 03:10:21.043159 | orchestrator | Saturday 14 February 2026 03:10:18 +0000 (0:00:00.548) 0:00:04.348 *****
2026-02-14 03:10:21.043181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-14 03:10:21.841704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-14 03:10:21.841808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-14 03:10:21.841886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-14 03:10:21.841901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-14 03:10:21.841988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-14 03:10:21.842004 | orchestrator |
2026-02-14 03:10:21.842077 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] ***
2026-02-14 03:10:21.842095 | orchestrator | Saturday 14 February 2026 03:10:21 +0000 (0:00:02.379) 0:00:06.727 *****
2026-02-14 03:10:21.842108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-14 03:10:21.842121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-14 03:10:21.842133 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:10:21.842146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-14 03:10:21.842184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-14 03:10:22.895775 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:10:22.895931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-14 03:10:22.895954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-14 03:10:22.895969 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:10:22.895981 | orchestrator |
2026-02-14 03:10:22.895994 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] ***
2026-02-14 03:10:22.896007 | orchestrator | Saturday 14 February 2026 03:10:21 +0000 (0:00:00.801) 0:00:07.529 *****
2026-02-14 03:10:22.896044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-14 03:10:22.896075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-14 03:10:22.896106 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:10:22.896119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-14 03:10:22.896160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-14 03:10:22.896174 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:10:22.896194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-14 03:10:22.896212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-14 03:10:22.896224 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:10:22.896235 | orchestrator |
2026-02-14 03:10:22.896247 | orchestrator | TASK [opensearch : Copying over config.json files for services] ****************
2026-02-14 03:10:22.896266 | orchestrator | Saturday 14 February 2026 03:10:22 +0000 (0:00:01.048) 0:00:08.577 *****
2026-02-14 03:10:30.664543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-14 03:10:30.664640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-14 03:10:30.664652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-14 03:10:30.664694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130',
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-14 03:10:30.664720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-14 03:10:30.664730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-14 03:10:30.664744 | orchestrator | 2026-02-14 03:10:30.664753 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-02-14 03:10:30.664762 | orchestrator | Saturday 14 February 2026 03:10:25 +0000 (0:00:02.217) 0:00:10.795 ***** 2026-02-14 03:10:30.664770 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:10:30.664779 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:10:30.664786 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:10:30.664793 | orchestrator | 2026-02-14 03:10:30.664801 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-02-14 03:10:30.664808 | orchestrator | Saturday 14 February 2026 03:10:27 +0000 (0:00:02.191) 0:00:12.987 ***** 2026-02-14 03:10:30.664816 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:10:30.664823 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:10:30.664883 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:10:30.664892 | 
orchestrator | 2026-02-14 03:10:30.664899 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-02-14 03:10:30.664907 | orchestrator | Saturday 14 February 2026 03:10:29 +0000 (0:00:01.736) 0:00:14.723 ***** 2026-02-14 03:10:30.664914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-14 03:10:30.664927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}}) 2026-02-14 03:10:30.664941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-14 03:13:11.424597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2026-02-14 03:13:11.424728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-14 03:13:11.424756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-14 03:13:11.424768 | orchestrator | 2026-02-14 03:13:11.424779 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-14 03:13:11.424789 | orchestrator | Saturday 14 February 2026 03:10:30 +0000 (0:00:01.627) 0:00:16.351 ***** 2026-02-14 03:13:11.424798 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:13:11.424808 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:13:11.424817 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:13:11.424825 | orchestrator | 2026-02-14 03:13:11.424835 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-14 03:13:11.424844 | orchestrator | Saturday 14 February 2026 03:10:30 +0000 (0:00:00.291) 0:00:16.642 ***** 2026-02-14 03:13:11.424853 | orchestrator | 2026-02-14 03:13:11.424861 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-14 03:13:11.424870 | orchestrator | Saturday 14 February 2026 03:10:31 +0000 (0:00:00.066) 0:00:16.709 ***** 2026-02-14 03:13:11.424879 | orchestrator | 2026-02-14 03:13:11.424887 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-14 03:13:11.424903 | orchestrator | Saturday 14 February 2026 03:10:31 +0000 (0:00:00.065) 0:00:16.775 ***** 2026-02-14 03:13:11.424912 | orchestrator | 2026-02-14 03:13:11.424920 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-02-14 03:13:11.424944 | orchestrator | Saturday 14 February 2026 03:10:31 +0000 (0:00:00.064) 0:00:16.839 ***** 2026-02-14 03:13:11.424954 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:13:11.424962 | orchestrator | 
2026-02-14 03:13:11.424971 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-02-14 03:13:11.424980 | orchestrator | Saturday 14 February 2026 03:10:31 +0000 (0:00:00.211) 0:00:17.051 ***** 2026-02-14 03:13:11.424988 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:13:11.424997 | orchestrator | 2026-02-14 03:13:11.425005 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-02-14 03:13:11.425014 | orchestrator | Saturday 14 February 2026 03:10:31 +0000 (0:00:00.649) 0:00:17.700 ***** 2026-02-14 03:13:11.425023 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:13:11.425031 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:13:11.425040 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:13:11.425048 | orchestrator | 2026-02-14 03:13:11.425057 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-02-14 03:13:11.425065 | orchestrator | Saturday 14 February 2026 03:11:39 +0000 (0:01:07.252) 0:01:24.953 ***** 2026-02-14 03:13:11.425074 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:13:11.425082 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:13:11.425091 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:13:11.425099 | orchestrator | 2026-02-14 03:13:11.425108 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-14 03:13:11.425149 | orchestrator | Saturday 14 February 2026 03:13:00 +0000 (0:01:21.489) 0:02:46.442 ***** 2026-02-14 03:13:11.425165 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:13:11.425181 | orchestrator | 2026-02-14 03:13:11.425196 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-02-14 03:13:11.425211 | orchestrator | Saturday 14 February 2026 03:13:01 +0000 
(0:00:00.520) 0:02:46.963 ***** 2026-02-14 03:13:11.425223 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:13:11.425233 | orchestrator | 2026-02-14 03:13:11.425243 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-02-14 03:13:11.425253 | orchestrator | Saturday 14 February 2026 03:13:03 +0000 (0:00:02.586) 0:02:49.550 ***** 2026-02-14 03:13:11.425263 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:13:11.425272 | orchestrator | 2026-02-14 03:13:11.425282 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-02-14 03:13:11.425292 | orchestrator | Saturday 14 February 2026 03:13:06 +0000 (0:00:02.258) 0:02:51.808 ***** 2026-02-14 03:13:11.425302 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:13:11.425312 | orchestrator | 2026-02-14 03:13:11.425321 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-02-14 03:13:11.425331 | orchestrator | Saturday 14 February 2026 03:13:08 +0000 (0:00:02.710) 0:02:54.519 ***** 2026-02-14 03:13:11.425341 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:13:11.425351 | orchestrator | 2026-02-14 03:13:11.425360 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 03:13:11.425371 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-14 03:13:11.425383 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-14 03:13:11.425398 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-14 03:13:11.425409 | orchestrator | 2026-02-14 03:13:11.425419 | orchestrator | 2026-02-14 03:13:11.425435 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 03:13:11.425445 | orchestrator | Saturday 14 
February 2026 03:13:11 +0000 (0:00:02.574) 0:02:57.093 ***** 2026-02-14 03:13:11.425455 | orchestrator | =============================================================================== 2026-02-14 03:13:11.425465 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 81.49s 2026-02-14 03:13:11.425474 | orchestrator | opensearch : Restart opensearch container ------------------------------ 67.25s 2026-02-14 03:13:11.425484 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.71s 2026-02-14 03:13:11.425494 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.59s 2026-02-14 03:13:11.425502 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.57s 2026-02-14 03:13:11.425511 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.38s 2026-02-14 03:13:11.425519 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.26s 2026-02-14 03:13:11.425528 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.22s 2026-02-14 03:13:11.425536 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.19s 2026-02-14 03:13:11.425545 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.74s 2026-02-14 03:13:11.425553 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.63s 2026-02-14 03:13:11.425562 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.62s 2026-02-14 03:13:11.425570 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.05s 2026-02-14 03:13:11.425579 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.80s 2026-02-14 03:13:11.425587 | orchestrator | opensearch : Setting 
sysctl values -------------------------------------- 0.68s 2026-02-14 03:13:11.425596 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.65s 2026-02-14 03:13:11.425611 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.55s 2026-02-14 03:13:11.744429 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s 2026-02-14 03:13:11.744528 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.51s 2026-02-14 03:13:11.744545 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s 2026-02-14 03:13:14.091718 | orchestrator | 2026-02-14 03:13:14 | INFO  | Task 67cfe652-941d-416c-a3ac-91080678d440 (memcached) was prepared for execution. 2026-02-14 03:13:14.091823 | orchestrator | 2026-02-14 03:13:14 | INFO  | It takes a moment until task 67cfe652-941d-416c-a3ac-91080678d440 (memcached) has been started and output is visible here. 
2026-02-14 03:13:25.576020 | orchestrator | 2026-02-14 03:13:25.576172 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-14 03:13:25.576194 | orchestrator | 2026-02-14 03:13:25.576205 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-14 03:13:25.576218 | orchestrator | Saturday 14 February 2026 03:13:18 +0000 (0:00:00.246) 0:00:00.246 ***** 2026-02-14 03:13:25.576230 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:13:25.576242 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:13:25.576253 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:13:25.576264 | orchestrator | 2026-02-14 03:13:25.576275 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-14 03:13:25.576286 | orchestrator | Saturday 14 February 2026 03:13:18 +0000 (0:00:00.287) 0:00:00.534 ***** 2026-02-14 03:13:25.576298 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-02-14 03:13:25.576309 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-02-14 03:13:25.576320 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-02-14 03:13:25.576332 | orchestrator | 2026-02-14 03:13:25.576343 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-02-14 03:13:25.576380 | orchestrator | 2026-02-14 03:13:25.576391 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-02-14 03:13:25.576403 | orchestrator | Saturday 14 February 2026 03:13:18 +0000 (0:00:00.408) 0:00:00.942 ***** 2026-02-14 03:13:25.576415 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:13:25.576428 | orchestrator | 2026-02-14 03:13:25.576440 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 
2026-02-14 03:13:25.576451 | orchestrator | Saturday 14 February 2026 03:13:19 +0000 (0:00:00.476) 0:00:01.419 ***** 2026-02-14 03:13:25.576463 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-02-14 03:13:25.576473 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-02-14 03:13:25.576482 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-02-14 03:13:25.576493 | orchestrator | 2026-02-14 03:13:25.576504 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-02-14 03:13:25.576515 | orchestrator | Saturday 14 February 2026 03:13:20 +0000 (0:00:00.659) 0:00:02.079 ***** 2026-02-14 03:13:25.576526 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-02-14 03:13:25.576537 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-02-14 03:13:25.576548 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-02-14 03:13:25.576559 | orchestrator | 2026-02-14 03:13:25.576570 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-02-14 03:13:25.576581 | orchestrator | Saturday 14 February 2026 03:13:21 +0000 (0:00:01.678) 0:00:03.757 ***** 2026-02-14 03:13:25.576606 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:13:25.576614 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:13:25.576621 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:13:25.576627 | orchestrator | 2026-02-14 03:13:25.576634 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-02-14 03:13:25.576641 | orchestrator | Saturday 14 February 2026 03:13:23 +0000 (0:00:01.406) 0:00:05.164 ***** 2026-02-14 03:13:25.576648 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:13:25.576654 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:13:25.576661 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:13:25.576668 | orchestrator | 2026-02-14 
03:13:25.576674 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 03:13:25.576681 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-14 03:13:25.576690 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-14 03:13:25.576696 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-14 03:13:25.576703 | orchestrator | 2026-02-14 03:13:25.576710 | orchestrator | 2026-02-14 03:13:25.576716 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 03:13:25.576723 | orchestrator | Saturday 14 February 2026 03:13:25 +0000 (0:00:02.077) 0:00:07.241 ***** 2026-02-14 03:13:25.576729 | orchestrator | =============================================================================== 2026-02-14 03:13:25.576735 | orchestrator | memcached : Restart memcached container --------------------------------- 2.08s 2026-02-14 03:13:25.576741 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.68s 2026-02-14 03:13:25.576748 | orchestrator | memcached : Check memcached container ----------------------------------- 1.41s 2026-02-14 03:13:25.576754 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.66s 2026-02-14 03:13:25.576760 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.48s 2026-02-14 03:13:25.576767 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.41s 2026-02-14 03:13:25.576773 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s 2026-02-14 03:13:27.934413 | orchestrator | 2026-02-14 03:13:27 | INFO  | Task dd90ac1f-0300-4469-bf9e-ce155c9fe8b8 (redis) was prepared for execution. 
2026-02-14 03:13:27.934485 | orchestrator | 2026-02-14 03:13:27 | INFO  | It takes a moment until task dd90ac1f-0300-4469-bf9e-ce155c9fe8b8 (redis) has been started and output is visible here. 2026-02-14 03:13:36.727791 | orchestrator | 2026-02-14 03:13:36.727902 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-14 03:13:36.727917 | orchestrator | 2026-02-14 03:13:36.727928 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-14 03:13:36.727939 | orchestrator | Saturday 14 February 2026 03:13:32 +0000 (0:00:00.256) 0:00:00.256 ***** 2026-02-14 03:13:36.727949 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:13:36.727961 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:13:36.727970 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:13:36.727980 | orchestrator | 2026-02-14 03:13:36.727990 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-14 03:13:36.727999 | orchestrator | Saturday 14 February 2026 03:13:32 +0000 (0:00:00.295) 0:00:00.552 ***** 2026-02-14 03:13:36.728009 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-02-14 03:13:36.728019 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-02-14 03:13:36.728029 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-02-14 03:13:36.728038 | orchestrator | 2026-02-14 03:13:36.728048 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-02-14 03:13:36.728058 | orchestrator | 2026-02-14 03:13:36.728067 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-02-14 03:13:36.728077 | orchestrator | Saturday 14 February 2026 03:13:32 +0000 (0:00:00.411) 0:00:00.964 ***** 2026-02-14 03:13:36.728087 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-02-14 03:13:36.728097 | orchestrator | 2026-02-14 03:13:36.728107 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-02-14 03:13:36.728117 | orchestrator | Saturday 14 February 2026 03:13:33 +0000 (0:00:00.448) 0:00:01.412 ***** 2026-02-14 03:13:36.728130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-14 03:13:36.728147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-14 03:13:36.728205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-14 03:13:36.728238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-14 03:13:36.728266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-14 03:13:36.728277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-14 03:13:36.728287 | orchestrator | 2026-02-14 03:13:36.728298 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-02-14 03:13:36.728308 | orchestrator | Saturday 14 February 2026 03:13:34 +0000 (0:00:01.067) 0:00:02.480 ***** 2026-02-14 03:13:36.728318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-14 03:13:36.728416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-14 03:13:36.728436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-14 03:13:36.728457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-14 03:13:36.728476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-14 03:13:40.703616 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-14 03:13:40.703703 | orchestrator | 2026-02-14 03:13:40.703714 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-02-14 03:13:40.703723 | orchestrator | Saturday 14 February 2026 03:13:36 +0000 (0:00:02.419) 0:00:04.899 ***** 2026-02-14 03:13:40.703732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-14 03:13:40.703756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-14 03:13:40.703764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-14 03:13:40.703790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-14 03:13:40.703799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-14 03:13:40.703819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-14 03:13:40.703827 | orchestrator | 2026-02-14 03:13:40.703835 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-02-14 03:13:40.703842 | orchestrator | Saturday 14 February 2026 03:13:39 +0000 (0:00:02.327) 0:00:07.226 ***** 2026-02-14 03:13:40.703850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-14 03:13:40.703857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 
'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-14 03:13:40.703869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-14 03:13:40.703883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-14 03:13:40.703890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': 
{'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-14 03:13:40.703904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-14 03:13:50.732240 | orchestrator | 2026-02-14 03:13:50.732360 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-14 03:13:50.732377 | orchestrator | Saturday 14 February 2026 03:13:40 +0000 (0:00:01.441) 0:00:08.668 ***** 2026-02-14 03:13:50.732389 | orchestrator | 2026-02-14 03:13:50.732401 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-14 03:13:50.732412 | orchestrator | Saturday 14 February 2026 03:13:40 +0000 (0:00:00.064) 0:00:08.732 ***** 2026-02-14 03:13:50.732423 | orchestrator | 2026-02-14 03:13:50.732434 | orchestrator | TASK [redis : Flush handlers] 
**************************************************
2026-02-14 03:13:50.732445 | orchestrator | Saturday 14 February 2026 03:13:40 +0000 (0:00:00.076) 0:00:08.809 *****
2026-02-14 03:13:50.732456 | orchestrator |
2026-02-14 03:13:50.732467 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-02-14 03:13:50.732478 | orchestrator | Saturday 14 February 2026 03:13:40 +0000 (0:00:00.065) 0:00:08.874 *****
2026-02-14 03:13:50.732491 | orchestrator | changed: [testbed-node-1]
2026-02-14 03:13:50.732510 | orchestrator | changed: [testbed-node-2]
2026-02-14 03:13:50.732529 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:13:50.732546 | orchestrator |
2026-02-14 03:13:50.732564 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-02-14 03:13:50.732583 | orchestrator | Saturday 14 February 2026 03:13:47 +0000 (0:00:06.618) 0:00:15.493 *****
2026-02-14 03:13:50.732618 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:13:50.732631 | orchestrator | changed: [testbed-node-1]
2026-02-14 03:13:50.732650 | orchestrator | changed: [testbed-node-2]
2026-02-14 03:13:50.732670 | orchestrator |
2026-02-14 03:13:50.732682 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 03:13:50.732693 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-14 03:13:50.732706 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-14 03:13:50.732737 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-14 03:13:50.732757 | orchestrator |
2026-02-14 03:13:50.732776 | orchestrator |
2026-02-14 03:13:50.732794 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 03:13:50.732810 | orchestrator | Saturday 14 February 2026 03:13:50 +0000 (0:00:03.091) 0:00:18.584 *****
2026-02-14 03:13:50.732826 | orchestrator | ===============================================================================
2026-02-14 03:13:50.732844 | orchestrator | redis : Restart redis container ----------------------------------------- 6.62s
2026-02-14 03:13:50.732864 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 3.09s
2026-02-14 03:13:50.732883 | orchestrator | redis : Copying over default config.json files -------------------------- 2.42s
2026-02-14 03:13:50.732895 | orchestrator | redis : Copying over redis config files --------------------------------- 2.33s
2026-02-14 03:13:50.732906 | orchestrator | redis : Check redis containers ------------------------------------------ 1.44s
2026-02-14 03:13:50.732923 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.07s
2026-02-14 03:13:50.732942 | orchestrator | redis : include_tasks --------------------------------------------------- 0.45s
2026-02-14 03:13:50.732961 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.41s
2026-02-14 03:13:50.732980 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s
2026-02-14 03:13:50.732991 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.21s
2026-02-14 03:13:53.015445 | orchestrator | 2026-02-14 03:13:53 | INFO  | Task f5e3ba2d-c13b-4bfa-98db-418562570ff0 (mariadb) was prepared for execution.
2026-02-14 03:13:53.015570 | orchestrator | 2026-02-14 03:13:53 | INFO  | It takes a moment until task f5e3ba2d-c13b-4bfa-98db-418562570ff0 (mariadb) has been started and output is visible here.
2026-02-14 03:14:06.168068 | orchestrator |
2026-02-14 03:14:06.168183 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-14 03:14:06.168200 | orchestrator |
2026-02-14 03:14:06.168253 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-14 03:14:06.168266 | orchestrator | Saturday 14 February 2026 03:13:57 +0000 (0:00:00.162) 0:00:00.162 *****
2026-02-14 03:14:06.168278 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:14:06.168291 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:14:06.168302 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:14:06.168313 | orchestrator |
2026-02-14 03:14:06.168324 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-14 03:14:06.168337 | orchestrator | Saturday 14 February 2026 03:13:57 +0000 (0:00:00.309) 0:00:00.472 *****
2026-02-14 03:14:06.168348 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-02-14 03:14:06.168360 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-02-14 03:14:06.168371 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-02-14 03:14:06.168382 | orchestrator |
2026-02-14 03:14:06.168393 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-02-14 03:14:06.168404 | orchestrator |
2026-02-14 03:14:06.168414 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-02-14 03:14:06.168447 | orchestrator | Saturday 14 February 2026 03:13:57 +0000 (0:00:00.547) 0:00:01.020 *****
2026-02-14 03:14:06.168459 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-14 03:14:06.168470 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-14 03:14:06.168481 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-14 03:14:06.168492 | orchestrator |
2026-02-14 03:14:06.168507 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-14 03:14:06.168525 | orchestrator | Saturday 14 February 2026 03:13:58 +0000 (0:00:00.354) 0:00:01.375 ***** 2026-02-14 03:14:06.168543 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:14:06.168562 | orchestrator | 2026-02-14 03:14:06.168581 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-02-14 03:14:06.168601 | orchestrator | Saturday 14 February 2026 03:13:58 +0000 (0:00:00.503) 0:00:01.879 ***** 2026-02-14 03:14:06.168647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-14 03:14:06.168688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-14 03:14:06.168718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' 
server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-14 03:14:06.168731 | orchestrator | 2026-02-14 03:14:06.168742 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-02-14 03:14:06.168754 | orchestrator | Saturday 14 February 2026 03:14:01 +0000 (0:00:02.447) 0:00:04.326 ***** 2026-02-14 03:14:06.168765 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:14:06.168777 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:14:06.168788 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:14:06.168798 | orchestrator | 2026-02-14 03:14:06.168809 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-02-14 03:14:06.168820 | orchestrator | Saturday 14 February 2026 03:14:01 +0000 (0:00:00.629) 0:00:04.956 ***** 2026-02-14 03:14:06.168831 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:14:06.168842 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:14:06.168853 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:14:06.168863 | orchestrator | 2026-02-14 03:14:06.168874 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-02-14 03:14:06.168885 | orchestrator | Saturday 14 February 2026 03:14:03 +0000 (0:00:01.357) 0:00:06.314 ***** 2026-02-14 03:14:06.168906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-14 03:14:13.505728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-14 03:14:13.505825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-14 03:14:13.505858 | orchestrator | 2026-02-14 03:14:13.505869 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-02-14 03:14:13.505879 | orchestrator | Saturday 14 February 2026 03:14:06 +0000 (0:00:02.891) 0:00:09.205 ***** 2026-02-14 03:14:13.505888 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:14:13.505897 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:14:13.505905 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:14:13.505913 | orchestrator | 2026-02-14 03:14:13.505921 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-02-14 03:14:13.505943 | orchestrator | Saturday 14 February 2026 03:14:07 +0000 (0:00:01.101) 0:00:10.306 ***** 2026-02-14 03:14:13.505951 | 
orchestrator | changed: [testbed-node-0] 2026-02-14 03:14:13.505959 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:14:13.505967 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:14:13.505975 | orchestrator | 2026-02-14 03:14:13.505983 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-14 03:14:13.505991 | orchestrator | Saturday 14 February 2026 03:14:10 +0000 (0:00:03.512) 0:00:13.819 ***** 2026-02-14 03:14:13.506000 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:14:13.506008 | orchestrator | 2026-02-14 03:14:13.506069 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-14 03:14:13.506080 | orchestrator | Saturday 14 February 2026 03:14:11 +0000 (0:00:00.544) 0:00:14.364 ***** 2026-02-14 03:14:13.506095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-14 03:14:13.506113 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:14:13.506129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-14 03:14:17.994195 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:14:17.994382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-14 03:14:17.994425 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:14:17.994438 | orchestrator | 2026-02-14 03:14:17.994450 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-14 03:14:17.994463 | orchestrator | Saturday 14 February 2026 03:14:13 +0000 (0:00:02.181) 0:00:16.546 ***** 2026-02-14 03:14:17.994476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-14 03:14:17.994488 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:14:17.994527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-14 03:14:17.994549 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:14:17.994561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-14 03:14:17.994573 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:14:17.994584 | orchestrator | 2026-02-14 03:14:17.994595 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-14 03:14:17.994606 | orchestrator | Saturday 14 February 2026 03:14:15 +0000 (0:00:02.318) 0:00:18.864 ***** 2026-02-14 03:14:17.994634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-14 03:14:20.668430 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:14:20.668565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-14 03:14:20.668595 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:14:20.668634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-14 03:14:20.668679 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:14:20.668698 | orchestrator | 2026-02-14 03:14:20.668717 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-02-14 03:14:20.668737 | orchestrator | Saturday 14 February 2026 03:14:17 +0000 (0:00:02.171) 0:00:21.036 ***** 2026-02-14 03:14:20.668780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-14 03:14:20.668803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': 
True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-14 03:14:20.668844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-14 03:16:33.680759 | orchestrator | 2026-02-14 03:16:33.680913 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-02-14 03:16:33.680944 | orchestrator | Saturday 14 February 2026 03:14:20 +0000 (0:00:02.672) 0:00:23.708 ***** 2026-02-14 03:16:33.680966 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:16:33.680986 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:16:33.681005 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:16:33.681024 | orchestrator | 2026-02-14 03:16:33.681043 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-02-14 03:16:33.681062 | orchestrator | Saturday 14 February 2026 03:14:21 +0000 (0:00:00.781) 0:00:24.490 ***** 2026-02-14 03:16:33.681080 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:16:33.681099 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:16:33.681118 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:16:33.681138 | orchestrator | 2026-02-14 03:16:33.681157 | orchestrator | TASK [mariadb : Establish 
whether the cluster has already existed] ************* 2026-02-14 03:16:33.681176 | orchestrator | Saturday 14 February 2026 03:14:21 +0000 (0:00:00.524) 0:00:25.014 ***** 2026-02-14 03:16:33.681195 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:16:33.681232 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:16:33.681250 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:16:33.681269 | orchestrator | 2026-02-14 03:16:33.681289 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-02-14 03:16:33.681308 | orchestrator | Saturday 14 February 2026 03:14:22 +0000 (0:00:00.316) 0:00:25.331 ***** 2026-02-14 03:16:33.681329 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-02-14 03:16:33.681350 | orchestrator | ...ignoring 2026-02-14 03:16:33.681370 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-02-14 03:16:33.681391 | orchestrator | ...ignoring 2026-02-14 03:16:33.681412 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-02-14 03:16:33.681432 | orchestrator | ...ignoring 2026-02-14 03:16:33.681542 | orchestrator | 2026-02-14 03:16:33.681565 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-02-14 03:16:33.681584 | orchestrator | Saturday 14 February 2026 03:14:33 +0000 (0:00:10.836) 0:00:36.167 ***** 2026-02-14 03:16:33.681603 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:16:33.681622 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:16:33.681639 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:16:33.681657 | orchestrator | 2026-02-14 03:16:33.681675 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-02-14 03:16:33.681694 | orchestrator | Saturday 14 February 2026 03:14:33 +0000 (0:00:00.388) 0:00:36.556 ***** 2026-02-14 03:16:33.681713 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:16:33.681731 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:16:33.681749 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:16:33.681767 | orchestrator | 2026-02-14 03:16:33.681786 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-02-14 03:16:33.681805 | orchestrator | Saturday 14 February 2026 03:14:34 +0000 (0:00:00.620) 0:00:37.176 ***** 2026-02-14 03:16:33.681824 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:16:33.681842 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:16:33.681861 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:16:33.681881 | orchestrator | 2026-02-14 03:16:33.681918 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-02-14 03:16:33.681938 | orchestrator | Saturday 14 February 2026 03:14:34 +0000 (0:00:00.440) 0:00:37.616 ***** 2026-02-14 03:16:33.681956 | orchestrator | skipping: 
[testbed-node-0] 2026-02-14 03:16:33.681973 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:16:33.681991 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:16:33.682007 | orchestrator | 2026-02-14 03:16:33.682116 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-02-14 03:16:33.682136 | orchestrator | Saturday 14 February 2026 03:14:34 +0000 (0:00:00.419) 0:00:38.035 ***** 2026-02-14 03:16:33.682155 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:16:33.682172 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:16:33.682190 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:16:33.682208 | orchestrator | 2026-02-14 03:16:33.682227 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-02-14 03:16:33.682248 | orchestrator | Saturday 14 February 2026 03:14:35 +0000 (0:00:00.443) 0:00:38.479 ***** 2026-02-14 03:16:33.682267 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:16:33.682286 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:16:33.682304 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:16:33.682322 | orchestrator | 2026-02-14 03:16:33.682340 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-14 03:16:33.682358 | orchestrator | Saturday 14 February 2026 03:14:36 +0000 (0:00:00.788) 0:00:39.268 ***** 2026-02-14 03:16:33.682377 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:16:33.682396 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:16:33.682414 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-02-14 03:16:33.682432 | orchestrator | 2026-02-14 03:16:33.682471 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-02-14 03:16:33.682484 | orchestrator | Saturday 14 February 2026 03:14:36 +0000 (0:00:00.366) 0:00:39.634 ***** 2026-02-14 
03:16:33.682494 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:16:33.682506 | orchestrator | 2026-02-14 03:16:33.682516 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-02-14 03:16:33.682528 | orchestrator | Saturday 14 February 2026 03:14:46 +0000 (0:00:10.229) 0:00:49.864 ***** 2026-02-14 03:16:33.682539 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:16:33.682551 | orchestrator | 2026-02-14 03:16:33.682570 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-14 03:16:33.682590 | orchestrator | Saturday 14 February 2026 03:14:46 +0000 (0:00:00.130) 0:00:49.994 ***** 2026-02-14 03:16:33.682608 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:16:33.682673 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:16:33.682694 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:16:33.682712 | orchestrator | 2026-02-14 03:16:33.682733 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-02-14 03:16:33.682752 | orchestrator | Saturday 14 February 2026 03:14:47 +0000 (0:00:00.956) 0:00:50.951 ***** 2026-02-14 03:16:33.682767 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:16:33.682778 | orchestrator | 2026-02-14 03:16:33.682789 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-02-14 03:16:33.682818 | orchestrator | Saturday 14 February 2026 03:14:55 +0000 (0:00:07.529) 0:00:58.481 ***** 2026-02-14 03:16:33.682829 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:16:33.682840 | orchestrator | 2026-02-14 03:16:33.682851 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-02-14 03:16:33.682862 | orchestrator | Saturday 14 February 2026 03:14:57 +0000 (0:00:01.655) 0:01:00.136 ***** 2026-02-14 03:16:33.682873 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:16:33.682884 | 
orchestrator | 2026-02-14 03:16:33.682895 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-02-14 03:16:33.682906 | orchestrator | Saturday 14 February 2026 03:14:59 +0000 (0:00:02.596) 0:01:02.733 ***** 2026-02-14 03:16:33.682917 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:16:33.682931 | orchestrator | 2026-02-14 03:16:33.682951 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-02-14 03:16:33.682969 | orchestrator | Saturday 14 February 2026 03:14:59 +0000 (0:00:00.118) 0:01:02.852 ***** 2026-02-14 03:16:33.682988 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:16:33.683008 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:16:33.683029 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:16:33.683050 | orchestrator | 2026-02-14 03:16:33.683070 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-02-14 03:16:33.683083 | orchestrator | Saturday 14 February 2026 03:15:00 +0000 (0:00:00.309) 0:01:03.162 ***** 2026-02-14 03:16:33.683094 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:16:33.683104 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-02-14 03:16:33.683115 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:16:33.683126 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:16:33.683137 | orchestrator | 2026-02-14 03:16:33.683148 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-14 03:16:33.683159 | orchestrator | skipping: no hosts matched 2026-02-14 03:16:33.683170 | orchestrator | 2026-02-14 03:16:33.683181 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-14 03:16:33.683192 | orchestrator | 2026-02-14 03:16:33.683202 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-02-14 03:16:33.683213 | orchestrator | Saturday 14 February 2026 03:15:00 +0000 (0:00:00.531) 0:01:03.693 ***** 2026-02-14 03:16:33.683224 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:16:33.683235 | orchestrator | 2026-02-14 03:16:33.683246 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-14 03:16:33.683256 | orchestrator | Saturday 14 February 2026 03:15:17 +0000 (0:00:17.146) 0:01:20.839 ***** 2026-02-14 03:16:33.683266 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:16:33.683275 | orchestrator | 2026-02-14 03:16:33.683285 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-14 03:16:33.683294 | orchestrator | Saturday 14 February 2026 03:15:34 +0000 (0:00:16.580) 0:01:37.420 ***** 2026-02-14 03:16:33.683309 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:16:33.683326 | orchestrator | 2026-02-14 03:16:33.683347 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-14 03:16:33.683365 | orchestrator | 2026-02-14 03:16:33.683392 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-14 03:16:33.683409 | orchestrator | Saturday 14 February 2026 03:15:36 +0000 (0:00:02.312) 0:01:39.732 ***** 2026-02-14 03:16:33.683433 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:16:33.683497 | orchestrator | 2026-02-14 03:16:33.683508 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-14 03:16:33.683518 | orchestrator | Saturday 14 February 2026 03:15:54 +0000 (0:00:17.602) 0:01:57.334 ***** 2026-02-14 03:16:33.683528 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:16:33.683538 | orchestrator | 2026-02-14 03:16:33.683548 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-14 03:16:33.683557 
| orchestrator | Saturday 14 February 2026 03:16:10 +0000 (0:00:16.569) 0:02:13.904 ***** 2026-02-14 03:16:33.683567 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:16:33.683577 | orchestrator | 2026-02-14 03:16:33.683586 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-02-14 03:16:33.683596 | orchestrator | 2026-02-14 03:16:33.683606 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-14 03:16:33.683615 | orchestrator | Saturday 14 February 2026 03:16:13 +0000 (0:00:02.539) 0:02:16.443 ***** 2026-02-14 03:16:33.683625 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:16:33.683635 | orchestrator | 2026-02-14 03:16:33.683644 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-14 03:16:33.683654 | orchestrator | Saturday 14 February 2026 03:16:29 +0000 (0:00:16.480) 0:02:32.924 ***** 2026-02-14 03:16:33.683669 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:16:33.683686 | orchestrator | 2026-02-14 03:16:33.683702 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-14 03:16:33.683718 | orchestrator | Saturday 14 February 2026 03:16:30 +0000 (0:00:00.580) 0:02:33.505 ***** 2026-02-14 03:16:33.683735 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:16:33.683751 | orchestrator | 2026-02-14 03:16:33.683766 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-02-14 03:16:33.683777 | orchestrator | 2026-02-14 03:16:33.683786 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-02-14 03:16:33.683796 | orchestrator | Saturday 14 February 2026 03:16:33 +0000 (0:00:02.710) 0:02:36.216 ***** 2026-02-14 03:16:33.683806 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:16:33.683816 | orchestrator | 
2026-02-14 03:16:33.683826 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-02-14 03:16:33.683845 | orchestrator | Saturday 14 February 2026 03:16:33 +0000 (0:00:00.502) 0:02:36.718 ***** 2026-02-14 03:16:46.092694 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:16:46.092816 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:16:46.092832 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:16:46.092845 | orchestrator | 2026-02-14 03:16:46.092858 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-02-14 03:16:46.092870 | orchestrator | Saturday 14 February 2026 03:16:35 +0000 (0:00:02.225) 0:02:38.944 ***** 2026-02-14 03:16:46.092881 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:16:46.092892 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:16:46.092903 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:16:46.092914 | orchestrator | 2026-02-14 03:16:46.092925 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-02-14 03:16:46.092937 | orchestrator | Saturday 14 February 2026 03:16:38 +0000 (0:00:02.138) 0:02:41.082 ***** 2026-02-14 03:16:46.092948 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:16:46.092959 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:16:46.092970 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:16:46.092981 | orchestrator | 2026-02-14 03:16:46.092992 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-02-14 03:16:46.093003 | orchestrator | Saturday 14 February 2026 03:16:40 +0000 (0:00:02.347) 0:02:43.429 ***** 2026-02-14 03:16:46.093014 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:16:46.093025 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:16:46.093036 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:16:46.093047 | orchestrator | 
2026-02-14 03:16:46.093083 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-02-14 03:16:46.093095 | orchestrator | Saturday 14 February 2026 03:16:42 +0000 (0:00:02.114) 0:02:45.544 ***** 2026-02-14 03:16:46.093106 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:16:46.093118 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:16:46.093129 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:16:46.093139 | orchestrator | 2026-02-14 03:16:46.093150 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-02-14 03:16:46.093161 | orchestrator | Saturday 14 February 2026 03:16:45 +0000 (0:00:02.895) 0:02:48.440 ***** 2026-02-14 03:16:46.093172 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:16:46.093183 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:16:46.093194 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:16:46.093205 | orchestrator | 2026-02-14 03:16:46.093216 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 03:16:46.093228 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-02-14 03:16:46.093240 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-02-14 03:16:46.093252 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-02-14 03:16:46.093263 | orchestrator | 2026-02-14 03:16:46.093273 | orchestrator | 2026-02-14 03:16:46.093284 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 03:16:46.093295 | orchestrator | Saturday 14 February 2026 03:16:45 +0000 (0:00:00.381) 0:02:48.821 ***** 2026-02-14 03:16:46.093306 | orchestrator | =============================================================================== 2026-02-14 03:16:46.093331 | 
orchestrator | mariadb : Restart MariaDB container ------------------------------------ 34.75s 2026-02-14 03:16:46.093343 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 33.15s 2026-02-14 03:16:46.093354 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 16.48s 2026-02-14 03:16:46.093365 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.84s 2026-02-14 03:16:46.093376 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.23s 2026-02-14 03:16:46.093387 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.53s 2026-02-14 03:16:46.093398 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.85s 2026-02-14 03:16:46.093409 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.51s 2026-02-14 03:16:46.093420 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.90s 2026-02-14 03:16:46.093431 | orchestrator | mariadb : Copying over config.json files for services ------------------- 2.89s 2026-02-14 03:16:46.093442 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.71s 2026-02-14 03:16:46.093452 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.67s 2026-02-14 03:16:46.093512 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.60s 2026-02-14 03:16:46.093524 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.45s 2026-02-14 03:16:46.093535 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.35s 2026-02-14 03:16:46.093546 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.32s 2026-02-14 03:16:46.093558 | 
orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.23s 2026-02-14 03:16:46.093569 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.18s 2026-02-14 03:16:46.093580 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.17s 2026-02-14 03:16:46.093591 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.14s 2026-02-14 03:16:48.386872 | orchestrator | 2026-02-14 03:16:48 | INFO  | Task d7aa0326-43a4-4268-b544-57ab3faaf1ed (rabbitmq) was prepared for execution. 2026-02-14 03:16:48.386943 | orchestrator | 2026-02-14 03:16:48 | INFO  | It takes a moment until task d7aa0326-43a4-4268-b544-57ab3faaf1ed (rabbitmq) has been started and output is visible here. 2026-02-14 03:17:01.312096 | orchestrator | 2026-02-14 03:17:01.312207 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-14 03:17:01.312224 | orchestrator | 2026-02-14 03:17:01.312236 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-14 03:17:01.312247 | orchestrator | Saturday 14 February 2026 03:16:52 +0000 (0:00:00.172) 0:00:00.172 ***** 2026-02-14 03:17:01.312259 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:17:01.312271 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:17:01.312282 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:17:01.312293 | orchestrator | 2026-02-14 03:17:01.312304 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-14 03:17:01.312316 | orchestrator | Saturday 14 February 2026 03:16:52 +0000 (0:00:00.301) 0:00:00.473 ***** 2026-02-14 03:17:01.312327 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-02-14 03:17:01.312338 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-02-14 03:17:01.312349 | orchestrator | ok: 
[testbed-node-2] => (item=enable_rabbitmq_True) 2026-02-14 03:17:01.312360 | orchestrator | 2026-02-14 03:17:01.312371 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-02-14 03:17:01.312383 | orchestrator | 2026-02-14 03:17:01.312395 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-14 03:17:01.312406 | orchestrator | Saturday 14 February 2026 03:16:53 +0000 (0:00:00.533) 0:00:01.007 ***** 2026-02-14 03:17:01.312418 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:17:01.312430 | orchestrator | 2026-02-14 03:17:01.312441 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-14 03:17:01.312452 | orchestrator | Saturday 14 February 2026 03:16:53 +0000 (0:00:00.512) 0:00:01.519 ***** 2026-02-14 03:17:01.312463 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:17:01.312475 | orchestrator | 2026-02-14 03:17:01.312508 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-02-14 03:17:01.312520 | orchestrator | Saturday 14 February 2026 03:16:54 +0000 (0:00:00.968) 0:00:02.488 ***** 2026-02-14 03:17:01.312531 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:17:01.312544 | orchestrator | 2026-02-14 03:17:01.312555 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-02-14 03:17:01.312566 | orchestrator | Saturday 14 February 2026 03:16:55 +0000 (0:00:00.355) 0:00:02.844 ***** 2026-02-14 03:17:01.312577 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:17:01.312588 | orchestrator | 2026-02-14 03:17:01.312600 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-02-14 03:17:01.312611 | orchestrator | Saturday 14 February 2026 03:16:55 +0000 (0:00:00.360) 0:00:03.204 ***** 
2026-02-14 03:17:01.312625 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:17:01.312639 | orchestrator | 2026-02-14 03:17:01.312652 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-02-14 03:17:01.312664 | orchestrator | Saturday 14 February 2026 03:16:55 +0000 (0:00:00.364) 0:00:03.568 ***** 2026-02-14 03:17:01.312677 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:17:01.312689 | orchestrator | 2026-02-14 03:17:01.312702 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-14 03:17:01.312715 | orchestrator | Saturday 14 February 2026 03:16:56 +0000 (0:00:00.570) 0:00:04.138 ***** 2026-02-14 03:17:01.312745 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:17:01.312782 | orchestrator | 2026-02-14 03:17:01.312796 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-14 03:17:01.312808 | orchestrator | Saturday 14 February 2026 03:16:57 +0000 (0:00:00.839) 0:00:04.977 ***** 2026-02-14 03:17:01.312820 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:17:01.312834 | orchestrator | 2026-02-14 03:17:01.312846 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-02-14 03:17:01.312859 | orchestrator | Saturday 14 February 2026 03:16:58 +0000 (0:00:00.863) 0:00:05.841 ***** 2026-02-14 03:17:01.312872 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:17:01.312885 | orchestrator | 2026-02-14 03:17:01.312898 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-02-14 03:17:01.312911 | orchestrator | Saturday 14 February 2026 03:16:58 +0000 (0:00:00.373) 0:00:06.215 ***** 2026-02-14 03:17:01.312924 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:17:01.312936 | orchestrator | 2026-02-14 
03:17:01.312948 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-02-14 03:17:01.312961 | orchestrator | Saturday 14 February 2026 03:16:58 +0000 (0:00:00.366) 0:00:06.582 ***** 2026-02-14 03:17:01.313000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-14 03:17:01.313017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-14 03:17:01.313031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-14 03:17:01.313051 | orchestrator | 2026-02-14 03:17:01.313068 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-02-14 03:17:01.313080 | orchestrator | Saturday 14 February 2026 03:16:59 +0000 (0:00:00.783) 0:00:07.365 ***** 2026-02-14 03:17:01.313092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-14 03:17:01.313113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 
'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-14 03:17:19.339350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-14 03:17:19.339433 | orchestrator | 2026-02-14 03:17:19.339443 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-02-14 03:17:19.339450 | orchestrator | Saturday 14 February 2026 03:17:01 +0000 (0:00:01.571) 0:00:08.937 ***** 2026-02-14 03:17:19.339473 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-14 03:17:19.339480 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-14 03:17:19.339485 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-14 03:17:19.339490 | orchestrator | 2026-02-14 03:17:19.339496 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] 
*********************************** 2026-02-14 03:17:19.339501 | orchestrator | Saturday 14 February 2026 03:17:02 +0000 (0:00:01.372) 0:00:10.310 ***** 2026-02-14 03:17:19.339550 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-14 03:17:19.339558 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-14 03:17:19.339563 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-14 03:17:19.339568 | orchestrator | 2026-02-14 03:17:19.339573 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-02-14 03:17:19.339578 | orchestrator | Saturday 14 February 2026 03:17:04 +0000 (0:00:01.625) 0:00:11.935 ***** 2026-02-14 03:17:19.339583 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-14 03:17:19.339588 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-14 03:17:19.339593 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-14 03:17:19.339598 | orchestrator | 2026-02-14 03:17:19.339603 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-02-14 03:17:19.339608 | orchestrator | Saturday 14 February 2026 03:17:05 +0000 (0:00:01.315) 0:00:13.251 ***** 2026-02-14 03:17:19.339613 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-14 03:17:19.339618 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-14 03:17:19.339623 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-14 03:17:19.339628 | orchestrator | 2026-02-14 03:17:19.339633 | orchestrator | TASK 
[rabbitmq : Copying over definitions.json] ********************************
2026-02-14 03:17:19.339638 | orchestrator | Saturday 14 February 2026 03:17:07 +0000 (0:00:01.581) 0:00:14.832 *****
2026-02-14 03:17:19.339643 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-02-14 03:17:19.339648 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-02-14 03:17:19.339653 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-02-14 03:17:19.339658 | orchestrator |
2026-02-14 03:17:19.339663 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-02-14 03:17:19.339668 | orchestrator | Saturday 14 February 2026 03:17:08 +0000 (0:00:01.414) 0:00:16.247 *****
2026-02-14 03:17:19.339673 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-02-14 03:17:19.339678 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-02-14 03:17:19.339683 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-02-14 03:17:19.339688 | orchestrator |
2026-02-14 03:17:19.339693 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-02-14 03:17:19.339698 | orchestrator | Saturday 14 February 2026 03:17:09 +0000 (0:00:01.308) 0:00:17.556 *****
2026-02-14 03:17:19.339704 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:17:19.339710 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:17:19.339727 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:17:19.339737 | orchestrator |
2026-02-14 03:17:19.339742 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2026-02-14 03:17:19.339747 | orchestrator | Saturday 14 February 2026 03:17:10 +0000 (0:00:00.378) 0:00:17.934 *****
2026-02-14 03:17:19.339754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-14 03:17:19.339764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-14 03:17:19.339770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-14 03:17:19.339776 | orchestrator |
2026-02-14 03:17:19.339781 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2026-02-14 03:17:19.339786 | orchestrator | Saturday 14 February 2026 03:17:11 +0000 (0:00:01.236) 0:00:19.170 *****
2026-02-14 03:17:19.339792 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:17:19.339797 | orchestrator | changed: [testbed-node-1]
2026-02-14 03:17:19.339802 | orchestrator | changed: [testbed-node-2]
2026-02-14 03:17:19.339807 | orchestrator |
2026-02-14 03:17:19.339812 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2026-02-14 03:17:19.339821 | orchestrator | Saturday 14 February 2026 03:17:12 +0000 (0:00:00.835) 0:00:20.006 *****
2026-02-14 03:17:19.339827 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:17:19.339832 | orchestrator | changed: [testbed-node-1]
2026-02-14 03:17:19.339837 | orchestrator | changed: [testbed-node-2]
2026-02-14 03:17:19.339842 | orchestrator |
2026-02-14 03:17:19.339848 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-02-14 03:17:19.339857 | orchestrator | Saturday 14 February 2026 03:17:19 +0000 (0:00:06.951) 0:00:26.958 *****
2026-02-14 03:18:55.987451 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:18:55.987553 | orchestrator | changed: [testbed-node-1]
2026-02-14 03:18:55.987563 | orchestrator | changed: [testbed-node-2]
2026-02-14 03:18:55.987571 | orchestrator |
2026-02-14 03:18:55.987579 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-02-14 03:18:55.987588 | orchestrator |
2026-02-14 03:18:55.987596 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-02-14 03:18:55.987603 | orchestrator | Saturday 14 February 2026 03:17:19 +0000 (0:00:00.471) 0:00:27.429 *****
2026-02-14 03:18:55.987610 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:18:55.987619 | orchestrator |
2026-02-14 03:18:55.987626 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-02-14 03:18:55.987633 | orchestrator | Saturday 14 February 2026 03:17:20 +0000 (0:00:00.594) 0:00:28.023 *****
2026-02-14 03:18:55.987640 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:18:55.987647 | orchestrator |
2026-02-14 03:18:55.987654 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-02-14 03:18:55.987661 | orchestrator | Saturday 14 February 2026 03:17:20 +0000 (0:00:00.223) 0:00:28.247 *****
2026-02-14 03:18:55.987669 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:18:55.987676 | orchestrator |
2026-02-14 03:18:55.987683 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-02-14 03:18:55.987747 | orchestrator | Saturday 14 February 2026 03:17:22 +0000 (0:00:01.615) 0:00:29.863 *****
2026-02-14 03:18:55.987755 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:18:55.987762 | orchestrator |
2026-02-14 03:18:55.987769 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-02-14 03:18:55.987776 | orchestrator |
2026-02-14 03:18:55.987783 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-02-14 03:18:55.987790 | orchestrator | Saturday 14 February 2026 03:18:17 +0000 (0:00:54.959) 0:01:24.823 *****
2026-02-14 03:18:55.987878 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:18:55.987886 | orchestrator |
2026-02-14 03:18:55.987893 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-02-14 03:18:55.987900 | orchestrator | Saturday 14 February 2026 03:18:17 +0000 (0:00:00.623) 0:01:25.446 *****
2026-02-14 03:18:55.987907 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:18:55.987914 | orchestrator |
2026-02-14 03:18:55.987921 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-02-14 03:18:55.987929 | orchestrator | Saturday 14 February 2026 03:18:18 +0000 (0:00:00.226) 0:01:25.672 *****
2026-02-14 03:18:55.987936 | orchestrator | changed: [testbed-node-1]
2026-02-14 03:18:55.987943 | orchestrator |
2026-02-14 03:18:55.987950 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-02-14 03:18:55.987973 | orchestrator | Saturday 14 February 2026 03:18:24 +0000 (0:00:06.530) 0:01:32.203 *****
2026-02-14 03:18:55.987980 | orchestrator | changed: [testbed-node-1]
2026-02-14 03:18:55.987988 | orchestrator |
2026-02-14 03:18:55.987995 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-02-14 03:18:55.988002 | orchestrator |
2026-02-14 03:18:55.988009 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-02-14 03:18:55.988017 | orchestrator | Saturday 14 February 2026 03:18:35 +0000 (0:00:10.592) 0:01:42.796 *****
2026-02-14 03:18:55.988024 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:18:55.988032 | orchestrator |
2026-02-14 03:18:55.988060 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-02-14 03:18:55.988068 | orchestrator | Saturday 14 February 2026 03:18:35 +0000 (0:00:00.759) 0:01:43.556 *****
2026-02-14 03:18:55.988075 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:18:55.988083 | orchestrator |
2026-02-14 03:18:55.988091 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-02-14 03:18:55.988099 | orchestrator | Saturday 14 February 2026 03:18:36 +0000 (0:00:00.224) 0:01:43.781 *****
2026-02-14 03:18:55.988107 | orchestrator | changed: [testbed-node-2]
2026-02-14 03:18:55.988115 | orchestrator |
2026-02-14 03:18:55.988122 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-02-14 03:18:55.988130 | orchestrator | Saturday 14 February 2026 03:18:37 +0000 (0:00:01.495) 0:01:45.276 *****
2026-02-14 03:18:55.988138 | orchestrator | changed: [testbed-node-2]
2026-02-14 03:18:55.988146 | orchestrator |
2026-02-14 03:18:55.988153 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-02-14 03:18:55.988160 | orchestrator |
2026-02-14 03:18:55.988168 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-02-14 03:18:55.988176 | orchestrator | Saturday 14 February 2026 03:18:52 +0000 (0:00:14.814) 0:02:00.090 *****
2026-02-14 03:18:55.988183 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 03:18:55.988191 | orchestrator |
2026-02-14 03:18:55.988199 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-02-14 03:18:55.988207 | orchestrator | Saturday 14 February 2026 03:18:52 +0000 (0:00:00.473) 0:02:00.563 *****
2026-02-14 03:18:55.988214 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-02-14 03:18:55.988222 | orchestrator | enable_outward_rabbitmq_True
2026-02-14 03:18:55.988229 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-02-14 03:18:55.988237 | orchestrator | outward_rabbitmq_restart
2026-02-14 03:18:55.988244 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:18:55.988252 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:18:55.988260 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:18:55.988267 | orchestrator |
2026-02-14 03:18:55.988275 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2026-02-14 03:18:55.988283 | orchestrator | skipping: no hosts matched
2026-02-14 03:18:55.988290 | orchestrator |
2026-02-14 03:18:55.988297 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2026-02-14 03:18:55.988306 | orchestrator | skipping: no hosts matched
2026-02-14 03:18:55.988313 | orchestrator |
2026-02-14 03:18:55.988321 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2026-02-14 03:18:55.988329 | orchestrator | skipping: no hosts matched
2026-02-14 03:18:55.988337 | orchestrator |
2026-02-14 03:18:55.988345 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 03:18:55.988369 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-02-14 03:18:55.988379 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-14 03:18:55.988386 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-14 03:18:55.988393 | orchestrator |
2026-02-14 03:18:55.988400 | orchestrator |
2026-02-14 03:18:55.988407 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 03:18:55.988414 | orchestrator | Saturday 14 February 2026 03:18:55 +0000 (0:00:02.718) 0:02:03.282 *****
2026-02-14 03:18:55.988421 | orchestrator | ===============================================================================
2026-02-14 03:18:55.988428 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 80.37s
2026-02-14 03:18:55.988435 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 9.64s
2026-02-14 03:18:55.988447 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.95s
2026-02-14 03:18:55.988454 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.72s
2026-02-14 03:18:55.988461 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.98s
2026-02-14 03:18:55.988468 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.63s
2026-02-14 03:18:55.988475 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.58s
2026-02-14 03:18:55.988482 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.57s
2026-02-14 03:18:55.988489 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.41s
2026-02-14 03:18:55.988497 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.37s
2026-02-14 03:18:55.988504 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.32s
2026-02-14 03:18:55.988511 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.31s
2026-02-14 03:18:55.988518 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.24s
2026-02-14 03:18:55.988525 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.97s
2026-02-14 03:18:55.988536 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.86s
2026-02-14 03:18:55.988543 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.84s
2026-02-14 03:18:55.988550 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.84s
2026-02-14 03:18:55.988557 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.78s
2026-02-14 03:18:55.988564 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.68s
2026-02-14 03:18:55.988571 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 0.57s
2026-02-14 03:18:58.293864 | orchestrator | 2026-02-14 03:18:58 | INFO  | Task 16fee028-92c9-43e3-b7ea-409a194f350c (openvswitch) was prepared for execution.
2026-02-14 03:18:58.293928 | orchestrator | 2026-02-14 03:18:58 | INFO  | It takes a moment until task 16fee028-92c9-43e3-b7ea-409a194f350c (openvswitch) has been started and output is visible here.
2026-02-14 03:19:10.564145 | orchestrator |
2026-02-14 03:19:10.564259 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-14 03:19:10.564276 | orchestrator |
2026-02-14 03:19:10.564288 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-14 03:19:10.564299 | orchestrator | Saturday 14 February 2026 03:19:02 +0000 (0:00:00.267) 0:00:00.267 *****
2026-02-14 03:19:10.564311 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:19:10.564323 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:19:10.564334 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:19:10.564345 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:19:10.564356 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:19:10.564367 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:19:10.564377 | orchestrator |
2026-02-14 03:19:10.564389 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-14 03:19:10.564400 | orchestrator | Saturday 14 February 2026 03:19:03 +0000 (0:00:00.664) 0:00:00.931 *****
2026-02-14 03:19:10.564411 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-14 03:19:10.564422 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-14 03:19:10.564433 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-14 03:19:10.564444 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-14 03:19:10.564455 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-14 03:19:10.564466 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-14 03:19:10.564477 | orchestrator |
2026-02-14 03:19:10.564512 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-02-14 03:19:10.564524 | orchestrator |
2026-02-14 03:19:10.564536 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-02-14 03:19:10.564547 | orchestrator | Saturday 14 February 2026 03:19:03 +0000 (0:00:00.571) 0:00:01.503 *****
2026-02-14 03:19:10.564559 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-14 03:19:10.564571 | orchestrator |
2026-02-14 03:19:10.564582 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-14 03:19:10.564593 | orchestrator | Saturday 14 February 2026 03:19:04 +0000 (0:00:01.094) 0:00:02.597 *****
2026-02-14 03:19:10.564604 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-02-14 03:19:10.564616 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-02-14 03:19:10.564626 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-02-14 03:19:10.564637 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-02-14 03:19:10.564648 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-02-14 03:19:10.564659 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-02-14 03:19:10.564669 | orchestrator |
2026-02-14 03:19:10.564682 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-14 03:19:10.564695 | orchestrator | Saturday 14 February 2026 03:19:05 +0000 (0:00:01.185) 0:00:03.783 *****
2026-02-14 03:19:10.564708 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-02-14 03:19:10.564754 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-02-14 03:19:10.564772 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-02-14 03:19:10.564790 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-02-14 03:19:10.564809 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-02-14 03:19:10.564821 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-02-14 03:19:10.564834 | orchestrator |
2026-02-14 03:19:10.564846 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-02-14 03:19:10.564858 | orchestrator | Saturday 14 February 2026 03:19:07 +0000 (0:00:01.434) 0:00:05.217 *****
2026-02-14 03:19:10.564871 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-02-14 03:19:10.564883 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:19:10.564897 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-02-14 03:19:10.564909 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:19:10.564921 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-02-14 03:19:10.564934 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:19:10.564946 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-02-14 03:19:10.564959 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:19:10.564971 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-02-14 03:19:10.564983 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:19:10.564996 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-02-14 03:19:10.565008 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:19:10.565021 | orchestrator |
2026-02-14 03:19:10.565033 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-02-14 03:19:10.565045 | orchestrator | Saturday 14 February 2026 03:19:08 +0000 (0:00:00.735) 0:00:06.358 *****
2026-02-14 03:19:10.565056 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:19:10.565067 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:19:10.565079 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:19:10.565097 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:19:10.565114 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:19:10.565130 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:19:10.565147 | orchestrator |
2026-02-14 03:19:10.565164 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2026-02-14 03:19:10.565196 | orchestrator | Saturday 14 February 2026 03:19:09 +0000 (0:00:00.735) 0:00:07.094 *****
2026-02-14 03:19:10.565244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-14 03:19:10.565272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-14 03:19:10.565291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-14 03:19:10.565418 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-14 03:19:10.565450 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-14 03:19:10.565475 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-14 03:19:12.889887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-14 03:19:12.889988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-14 03:19:12.890002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-14 03:19:12.890055 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-14 03:19:12.890084 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-14 03:19:12.890132 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-14 03:19:12.890144 | orchestrator |
2026-02-14 03:19:12.890155 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-02-14 03:19:12.890167 | orchestrator | Saturday 14 February 2026 03:19:10 +0000 (0:00:01.386) 0:00:08.480 *****
2026-02-14 03:19:12.890177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-14 03:19:12.890189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-14 03:19:12.890199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-14 03:19:12.890210 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-14 03:19:12.890230 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-14 03:19:12.890247 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-14 03:19:15.526224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-14 03:19:15.526327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-14 03:19:15.526341 | orchestrator | changed: [testbed-node-2] =>
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-14 03:19:15.526369 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-14 03:19:15.526403 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-14 03:19:15.526430 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-14 03:19:15.526443 | orchestrator | 2026-02-14 03:19:15.526455 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-02-14 03:19:15.526467 | orchestrator | Saturday 14 February 2026 03:19:12 +0000 (0:00:02.328) 0:00:10.808 ***** 2026-02-14 03:19:15.526478 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:19:15.526490 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:19:15.526500 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:19:15.526510 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:19:15.526520 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:19:15.526530 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:19:15.526541 | orchestrator | 2026-02-14 03:19:15.526552 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-02-14 03:19:15.526563 | orchestrator | Saturday 14 February 2026 03:19:13 +0000 (0:00:00.950) 0:00:11.759 ***** 2026-02-14 03:19:15.526574 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-14 03:19:15.526586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-14 03:19:15.526608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-14 03:19:15.526620 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-14 03:19:15.526639 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-14 03:19:40.544367 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-14 03:19:40.544473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-14 03:19:40.544488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-14 
03:19:40.544536 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-14 03:19:40.544548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-14 03:19:40.544575 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-14 03:19:40.544586 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-14 03:19:40.544597 | orchestrator | 2026-02-14 03:19:40.544609 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-14 03:19:40.544621 | orchestrator | Saturday 14 February 2026 03:19:15 +0000 (0:00:01.681) 0:00:13.440 ***** 2026-02-14 03:19:40.544631 | orchestrator | 2026-02-14 03:19:40.544640 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-14 03:19:40.544650 | orchestrator | Saturday 14 February 2026 03:19:15 +0000 (0:00:00.297) 0:00:13.738 ***** 2026-02-14 03:19:40.544667 | orchestrator | 2026-02-14 03:19:40.544677 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-14 03:19:40.544687 | orchestrator | Saturday 14 February 2026 03:19:16 +0000 (0:00:00.132) 0:00:13.871 ***** 2026-02-14 03:19:40.544696 | orchestrator | 2026-02-14 03:19:40.544706 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 
2026-02-14 03:19:40.544716 | orchestrator | Saturday 14 February 2026 03:19:16 +0000 (0:00:00.129) 0:00:14.000 ***** 2026-02-14 03:19:40.544725 | orchestrator | 2026-02-14 03:19:40.544735 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-14 03:19:40.544745 | orchestrator | Saturday 14 February 2026 03:19:16 +0000 (0:00:00.156) 0:00:14.157 ***** 2026-02-14 03:19:40.544755 | orchestrator | 2026-02-14 03:19:40.544819 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-14 03:19:40.544830 | orchestrator | Saturday 14 February 2026 03:19:16 +0000 (0:00:00.126) 0:00:14.283 ***** 2026-02-14 03:19:40.544840 | orchestrator | 2026-02-14 03:19:40.544850 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-02-14 03:19:40.544859 | orchestrator | Saturday 14 February 2026 03:19:16 +0000 (0:00:00.126) 0:00:14.410 ***** 2026-02-14 03:19:40.544869 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:19:40.544880 | orchestrator | changed: [testbed-node-5] 2026-02-14 03:19:40.544890 | orchestrator | changed: [testbed-node-3] 2026-02-14 03:19:40.544899 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:19:40.544911 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:19:40.544921 | orchestrator | changed: [testbed-node-4] 2026-02-14 03:19:40.544932 | orchestrator | 2026-02-14 03:19:40.544942 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-02-14 03:19:40.544953 | orchestrator | Saturday 14 February 2026 03:19:25 +0000 (0:00:08.792) 0:00:23.202 ***** 2026-02-14 03:19:40.544964 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:19:40.544980 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:19:40.544992 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:19:40.545002 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:19:40.545013 | orchestrator | ok: 
[testbed-node-4] 2026-02-14 03:19:40.545025 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:19:40.545035 | orchestrator | 2026-02-14 03:19:40.545046 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-02-14 03:19:40.545057 | orchestrator | Saturday 14 February 2026 03:19:26 +0000 (0:00:01.079) 0:00:24.281 ***** 2026-02-14 03:19:40.545068 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:19:40.545079 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:19:40.545089 | orchestrator | changed: [testbed-node-4] 2026-02-14 03:19:40.545099 | orchestrator | changed: [testbed-node-5] 2026-02-14 03:19:40.545108 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:19:40.545118 | orchestrator | changed: [testbed-node-3] 2026-02-14 03:19:40.545127 | orchestrator | 2026-02-14 03:19:40.545137 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-02-14 03:19:40.545146 | orchestrator | Saturday 14 February 2026 03:19:34 +0000 (0:00:07.788) 0:00:32.070 ***** 2026-02-14 03:19:40.545156 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-02-14 03:19:40.545166 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-02-14 03:19:40.545176 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-02-14 03:19:40.545185 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-02-14 03:19:40.545195 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-02-14 03:19:40.545205 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-02-14 
03:19:40.545214 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-02-14 03:19:40.545237 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-02-14 03:19:53.471924 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-02-14 03:19:53.472052 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-02-14 03:19:53.472070 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-02-14 03:19:53.472082 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-02-14 03:19:53.472094 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-14 03:19:53.472105 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-14 03:19:53.472116 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-14 03:19:53.472128 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-14 03:19:53.472139 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-14 03:19:53.472150 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-14 03:19:53.472161 | orchestrator | 2026-02-14 03:19:53.472174 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 
2026-02-14 03:19:53.472186 | orchestrator | Saturday 14 February 2026 03:19:40 +0000 (0:00:06.308) 0:00:38.378 ***** 2026-02-14 03:19:53.472199 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-02-14 03:19:53.472211 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:19:53.472224 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-02-14 03:19:53.472235 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:19:53.472246 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-02-14 03:19:53.472258 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:19:53.472269 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-02-14 03:19:53.472280 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-02-14 03:19:53.472291 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-02-14 03:19:53.472302 | orchestrator | 2026-02-14 03:19:53.472313 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-02-14 03:19:53.472324 | orchestrator | Saturday 14 February 2026 03:19:42 +0000 (0:00:02.338) 0:00:40.717 ***** 2026-02-14 03:19:53.472336 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-02-14 03:19:53.472347 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:19:53.472359 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-02-14 03:19:53.472372 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:19:53.472386 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-02-14 03:19:53.472406 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:19:53.472427 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-02-14 03:19:53.472441 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-02-14 03:19:53.472470 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-02-14 03:19:53.472483 | orchestrator 
| 2026-02-14 03:19:53.472494 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-02-14 03:19:53.472505 | orchestrator | Saturday 14 February 2026 03:19:45 +0000 (0:00:03.011) 0:00:43.728 ***** 2026-02-14 03:19:53.472516 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:19:53.472527 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:19:53.472560 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:19:53.472572 | orchestrator | changed: [testbed-node-3] 2026-02-14 03:19:53.472583 | orchestrator | changed: [testbed-node-4] 2026-02-14 03:19:53.472594 | orchestrator | changed: [testbed-node-5] 2026-02-14 03:19:53.472605 | orchestrator | 2026-02-14 03:19:53.472616 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 03:19:53.472631 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-14 03:19:53.472653 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-14 03:19:53.472672 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-14 03:19:53.472683 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-14 03:19:53.472694 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-14 03:19:53.472705 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-14 03:19:53.472716 | orchestrator | 2026-02-14 03:19:53.472727 | orchestrator | 2026-02-14 03:19:53.472738 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 03:19:53.472750 | orchestrator | Saturday 14 February 2026 03:19:53 +0000 (0:00:07.182) 0:00:50.911 ***** 2026-02-14 03:19:53.472811 | 
orchestrator | =============================================================================== 2026-02-14 03:19:53.472840 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 14.97s 2026-02-14 03:19:53.472859 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 8.79s 2026-02-14 03:19:53.472877 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.31s 2026-02-14 03:19:53.472893 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.01s 2026-02-14 03:19:53.472911 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.34s 2026-02-14 03:19:53.472929 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.33s 2026-02-14 03:19:53.472945 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 1.68s 2026-02-14 03:19:53.472963 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.43s 2026-02-14 03:19:53.472979 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.39s 2026-02-14 03:19:53.472997 | orchestrator | module-load : Load modules ---------------------------------------------- 1.19s 2026-02-14 03:19:53.473015 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.14s 2026-02-14 03:19:53.473033 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.09s 2026-02-14 03:19:53.473052 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.08s 2026-02-14 03:19:53.473070 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 0.97s 2026-02-14 03:19:53.473088 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.95s 2026-02-14 03:19:53.473106 | orchestrator | 
openvswitch : Create /run/openvswitch directory on host ----------------- 0.74s 2026-02-14 03:19:53.473125 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.66s 2026-02-14 03:19:53.473144 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.57s 2026-02-14 03:19:55.765770 | orchestrator | 2026-02-14 03:19:55 | INFO  | Task 3750e368-9835-4970-b038-4d8903bdfc76 (ovn) was prepared for execution. 2026-02-14 03:19:55.765926 | orchestrator | 2026-02-14 03:19:55 | INFO  | It takes a moment until task 3750e368-9835-4970-b038-4d8903bdfc76 (ovn) has been started and output is visible here. 2026-02-14 03:20:06.210630 | orchestrator | 2026-02-14 03:20:06.210778 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-14 03:20:06.210799 | orchestrator | 2026-02-14 03:20:06.210876 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-14 03:20:06.210889 | orchestrator | Saturday 14 February 2026 03:19:59 +0000 (0:00:00.171) 0:00:00.171 ***** 2026-02-14 03:20:06.210901 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:20:06.210914 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:20:06.210925 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:20:06.210936 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:20:06.210947 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:20:06.210958 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:20:06.210970 | orchestrator | 2026-02-14 03:20:06.210981 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-14 03:20:06.210992 | orchestrator | Saturday 14 February 2026 03:20:00 +0000 (0:00:00.705) 0:00:00.876 ***** 2026-02-14 03:20:06.211022 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-02-14 03:20:06.211034 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-02-14 
03:20:06.211045 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-02-14 03:20:06.211056 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-02-14 03:20:06.211067 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-02-14 03:20:06.211078 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-02-14 03:20:06.211089 | orchestrator | 2026-02-14 03:20:06.211101 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-02-14 03:20:06.211113 | orchestrator | 2026-02-14 03:20:06.211124 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-02-14 03:20:06.211136 | orchestrator | Saturday 14 February 2026 03:20:01 +0000 (0:00:00.785) 0:00:01.661 ***** 2026-02-14 03:20:06.211150 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:20:06.211164 | orchestrator | 2026-02-14 03:20:06.211177 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-02-14 03:20:06.211190 | orchestrator | Saturday 14 February 2026 03:20:02 +0000 (0:00:01.063) 0:00:02.725 ***** 2026-02-14 03:20:06.211206 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:20:06.211221 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:20:06.211235 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:20:06.211248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:20:06.211281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:20:06.211315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:20:06.211328 | orchestrator | 2026-02-14 03:20:06.211341 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-02-14 03:20:06.211354 | orchestrator | Saturday 14 February 2026 03:20:03 +0000 (0:00:01.189) 0:00:03.915 ***** 2026-02-14 03:20:06.211373 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:20:06.211386 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:20:06.211400 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:20:06.211413 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:20:06.211426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:20:06.211439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:20:06.211459 | orchestrator | 2026-02-14 03:20:06.211472 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-02-14 03:20:06.211485 | orchestrator | Saturday 14 February 2026 03:20:05 +0000 (0:00:01.555) 0:00:05.471 ***** 2026-02-14 03:20:06.211498 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:20:06.211511 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:20:06.211532 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:20:30.866699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:20:30.866812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:20:30.866829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:20:30.866841 | orchestrator | 2026-02-14 03:20:30.866894 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-02-14 03:20:30.866907 | orchestrator | Saturday 14 February 2026 03:20:06 +0000 (0:00:01.099) 0:00:06.570 ***** 2026-02-14 03:20:30.866918 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:20:30.866930 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:20:30.866964 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:20:30.866977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:20:30.866988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:20:30.867017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:20:30.867028 | orchestrator | 2026-02-14 03:20:30.867040 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-02-14 03:20:30.867051 | orchestrator | Saturday 14 February 2026 03:20:07 +0000 (0:00:01.478) 0:00:08.048 ***** 
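The preceding tasks lay down the kolla-ansible artefacts for the `ovn_controller` container on each node: a config directory, a `config.json` telling the kolla start script which files to copy, and a systemd override. A hedged way to inspect the result on one of the nodes (paths taken from the volume list in the log above; this is an illustrative sketch, not part of the job):

```shell
# Inspect what the ovn-controller role deployed (run on e.g. testbed-node-0).
ls -l /etc/kolla/ovn-controller/           # rendered config directory
cat /etc/kolla/ovn-controller/config.json  # copy instructions for the kolla start script
docker inspect -f '{{ .Config.Image }}' ovn_controller  # should show the 24.9.3.20251130 tag
```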
2026-02-14 03:20:30.867069 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:20:30.867081 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:20:30.867092 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:20:30.867103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:20:30.867123 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:20:30.867134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 03:20:30.867145 | orchestrator | 2026-02-14 03:20:30.867157 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-02-14 03:20:30.867168 | orchestrator | Saturday 14 February 2026 03:20:08 +0000 (0:00:01.314) 0:00:09.363 ***** 2026-02-14 03:20:30.867180 | orchestrator | changed: [testbed-node-3] 2026-02-14 03:20:30.867192 | orchestrator | changed: [testbed-node-4] 2026-02-14 03:20:30.867203 | orchestrator | changed: [testbed-node-5] 2026-02-14 03:20:30.867214 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:20:30.867227 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:20:30.867238 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:20:30.867250 | orchestrator | 2026-02-14 03:20:30.867263 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-02-14 03:20:30.867275 | orchestrator | Saturday 14 February 2026 03:20:11 +0000 (0:00:02.417) 0:00:11.781 ***** 2026-02-14 03:20:30.867287 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 
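The "Configure OVN in OVSDB" loop writes chassis settings into the `external_ids` column of the local `Open_vSwitch` table, which `ovn-controller` reads to find its encapsulation IP, tunnel type, and the southbound DB endpoints. The per-item changes correspond to `ovs-vsctl` calls of roughly this shape (values shown for testbed-node-0; a sketch of the effect, not the role's literal implementation):

```shell
# Chassis-side OVN settings, as applied by the task loop (example: testbed-node-0)
ovs-vsctl set open_vswitch . \
    external_ids:ovn-encap-ip=192.168.16.10 \
    external_ids:ovn-encap-type=geneve \
    external_ids:ovn-remote="tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642" \
    external_ids:ovn-remote-probe-interval=60000 \
    external_ids:ovn-openflow-probe-interval=60 \
    external_ids:ovn-monitor-all=false

# Only the three control nodes get bridge mappings and the gateway-chassis flag;
# on the compute nodes (testbed-node-3..5) these keys are removed instead.
ovs-vsctl set open_vswitch . \
    external_ids:ovn-bridge-mappings=physnet1:br-ex \
    external_ids:ovn-cms-options=enable-chassis-as-gw,availability-zones=nova
```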
2026-02-14 03:20:30.867300 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-02-14 03:20:30.867312 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-02-14 03:20:30.867323 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-02-14 03:20:30.867335 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-02-14 03:20:30.867347 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-02-14 03:20:30.867367 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-14 03:21:04.803716 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-14 03:21:04.803820 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-14 03:21:04.803847 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-14 03:21:04.803856 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-14 03:21:04.803864 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-14 03:21:04.803873 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-14 03:21:04.803883 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-14 03:21:04.803998 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-14 03:21:04.804017 | 
orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-14 03:21:04.804030 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-14 03:21:04.804043 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-14 03:21:04.804058 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-14 03:21:04.804073 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-14 03:21:04.804087 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-14 03:21:04.804095 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-14 03:21:04.804104 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-14 03:21:04.804112 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-14 03:21:04.804120 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-14 03:21:04.804128 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-14 03:21:04.804135 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-14 03:21:04.804143 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-14 03:21:04.804151 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 
'value': '60'}) 2026-02-14 03:21:04.804159 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-14 03:21:04.804167 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-14 03:21:04.804175 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-14 03:21:04.804183 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-14 03:21:04.804191 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-14 03:21:04.804198 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-14 03:21:04.804206 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-14 03:21:04.804215 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-14 03:21:04.804223 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-14 03:21:04.804233 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-14 03:21:04.804242 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-14 03:21:04.804251 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-14 03:21:04.804261 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-02-14 03:21:04.804272 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 
'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-02-14 03:21:04.804307 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-14 03:21:04.804317 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-02-14 03:21:04.804339 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-02-14 03:21:04.804353 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-02-14 03:21:04.804367 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-14 03:21:04.804381 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-14 03:21:04.804394 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-02-14 03:21:04.804409 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-14 03:21:04.804424 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-14 03:21:04.804438 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-14 03:21:04.804451 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-14 03:21:04.804462 | orchestrator | 2026-02-14 03:21:04.804472 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2026-02-14 03:21:04.804482 | orchestrator | Saturday 14 February 2026 03:20:30 +0000 (0:00:18.906) 0:00:30.688 ***** 2026-02-14 03:21:04.804491 | orchestrator | 2026-02-14 03:21:04.804500 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-14 03:21:04.804510 | orchestrator | Saturday 14 February 2026 03:20:30 +0000 (0:00:00.216) 0:00:30.904 ***** 2026-02-14 03:21:04.804519 | orchestrator | 2026-02-14 03:21:04.804528 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-14 03:21:04.804537 | orchestrator | Saturday 14 February 2026 03:20:30 +0000 (0:00:00.063) 0:00:30.967 ***** 2026-02-14 03:21:04.804544 | orchestrator | 2026-02-14 03:21:04.804552 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-14 03:21:04.804560 | orchestrator | Saturday 14 February 2026 03:20:30 +0000 (0:00:00.063) 0:00:31.031 ***** 2026-02-14 03:21:04.804568 | orchestrator | 2026-02-14 03:21:04.804576 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-14 03:21:04.804584 | orchestrator | Saturday 14 February 2026 03:20:30 +0000 (0:00:00.062) 0:00:31.093 ***** 2026-02-14 03:21:04.804591 | orchestrator | 2026-02-14 03:21:04.804599 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-14 03:21:04.804607 | orchestrator | Saturday 14 February 2026 03:20:30 +0000 (0:00:00.065) 0:00:31.159 ***** 2026-02-14 03:21:04.804615 | orchestrator | 2026-02-14 03:21:04.804623 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-02-14 03:21:04.804631 | orchestrator | Saturday 14 February 2026 03:20:30 +0000 (0:00:00.064) 0:00:31.224 ***** 2026-02-14 03:21:04.804639 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:21:04.804647 | orchestrator | ok: 
[testbed-node-5] 2026-02-14 03:21:04.804655 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:21:04.804663 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:21:04.804671 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:21:04.804680 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:21:04.804694 | orchestrator | 2026-02-14 03:21:04.804707 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-02-14 03:21:04.804719 | orchestrator | Saturday 14 February 2026 03:20:32 +0000 (0:00:01.582) 0:00:32.807 ***** 2026-02-14 03:21:04.804741 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:21:04.804755 | orchestrator | changed: [testbed-node-3] 2026-02-14 03:21:04.804769 | orchestrator | changed: [testbed-node-5] 2026-02-14 03:21:04.804783 | orchestrator | changed: [testbed-node-4] 2026-02-14 03:21:04.804796 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:21:04.804807 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:21:04.804815 | orchestrator | 2026-02-14 03:21:04.804823 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-02-14 03:21:04.804831 | orchestrator | 2026-02-14 03:21:04.804839 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-14 03:21:04.804857 | orchestrator | Saturday 14 February 2026 03:21:02 +0000 (0:00:30.229) 0:01:03.036 ***** 2026-02-14 03:21:04.804865 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:21:04.804873 | orchestrator | 2026-02-14 03:21:04.804881 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-14 03:21:04.804889 | orchestrator | Saturday 14 February 2026 03:21:03 +0000 (0:00:00.674) 0:01:03.711 ***** 2026-02-14 03:21:04.804897 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-14 03:21:04.804923 | orchestrator | 2026-02-14 03:21:04.804932 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-02-14 03:21:04.804940 | orchestrator | Saturday 14 February 2026 03:21:03 +0000 (0:00:00.534) 0:01:04.245 ***** 2026-02-14 03:21:04.804948 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:21:04.804956 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:21:04.804964 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:21:04.804972 | orchestrator | 2026-02-14 03:21:04.804980 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-02-14 03:21:04.804995 | orchestrator | Saturday 14 February 2026 03:21:04 +0000 (0:00:00.916) 0:01:05.161 ***** 2026-02-14 03:21:15.677277 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:21:15.677394 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:21:15.677411 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:21:15.677423 | orchestrator | 2026-02-14 03:21:15.677436 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-02-14 03:21:15.677465 | orchestrator | Saturday 14 February 2026 03:21:05 +0000 (0:00:00.327) 0:01:05.489 ***** 2026-02-14 03:21:15.677477 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:21:15.677488 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:21:15.677499 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:21:15.677511 | orchestrator | 2026-02-14 03:21:15.677522 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-02-14 03:21:15.677534 | orchestrator | Saturday 14 February 2026 03:21:05 +0000 (0:00:00.312) 0:01:05.802 ***** 2026-02-14 03:21:15.677545 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:21:15.677556 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:21:15.677567 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:21:15.677578 | orchestrator | 
2026-02-14 03:21:15.677589 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-02-14 03:21:15.677600 | orchestrator | Saturday 14 February 2026 03:21:05 +0000 (0:00:00.326) 0:01:06.129 ***** 2026-02-14 03:21:15.677611 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:21:15.677629 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:21:15.677648 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:21:15.677666 | orchestrator | 2026-02-14 03:21:15.677685 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-02-14 03:21:15.677704 | orchestrator | Saturday 14 February 2026 03:21:06 +0000 (0:00:00.481) 0:01:06.611 ***** 2026-02-14 03:21:15.677722 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:21:15.677740 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:21:15.677757 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:21:15.677774 | orchestrator | 2026-02-14 03:21:15.677793 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-02-14 03:21:15.677844 | orchestrator | Saturday 14 February 2026 03:21:06 +0000 (0:00:00.290) 0:01:06.901 ***** 2026-02-14 03:21:15.677864 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:21:15.677883 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:21:15.677903 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:21:15.677947 | orchestrator | 2026-02-14 03:21:15.677967 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-02-14 03:21:15.677985 | orchestrator | Saturday 14 February 2026 03:21:06 +0000 (0:00:00.282) 0:01:07.184 ***** 2026-02-14 03:21:15.678002 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:21:15.678097 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:21:15.678122 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:21:15.678181 | orchestrator | 2026-02-14 
03:21:15.678202 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-02-14 03:21:15.678221 | orchestrator | Saturday 14 February 2026 03:21:07 +0000 (0:00:00.304) 0:01:07.488 *****
2026-02-14 03:21:15.678236 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:21:15.678253 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:21:15.678269 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:21:15.678288 | orchestrator |
2026-02-14 03:21:15.678307 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-02-14 03:21:15.678325 | orchestrator | Saturday 14 February 2026 03:21:07 +0000 (0:00:00.309) 0:01:07.798 *****
2026-02-14 03:21:15.678348 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:21:15.678372 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:21:15.678391 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:21:15.678408 | orchestrator |
2026-02-14 03:21:15.678426 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-02-14 03:21:15.678444 | orchestrator | Saturday 14 February 2026 03:21:07 +0000 (0:00:00.486) 0:01:08.284 *****
2026-02-14 03:21:15.678460 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:21:15.678476 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:21:15.678493 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:21:15.678511 | orchestrator |
2026-02-14 03:21:15.678529 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-02-14 03:21:15.678549 | orchestrator | Saturday 14 February 2026 03:21:08 +0000 (0:00:00.301) 0:01:08.586 *****
2026-02-14 03:21:15.678568 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:21:15.678585 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:21:15.678602 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:21:15.678619 | orchestrator |
2026-02-14 03:21:15.678638 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-02-14 03:21:15.678656 | orchestrator | Saturday 14 February 2026 03:21:08 +0000 (0:00:00.296) 0:01:08.882 *****
2026-02-14 03:21:15.678675 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:21:15.678692 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:21:15.678709 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:21:15.678728 | orchestrator |
2026-02-14 03:21:15.678745 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-02-14 03:21:15.678764 | orchestrator | Saturday 14 February 2026 03:21:08 +0000 (0:00:00.343) 0:01:09.226 *****
2026-02-14 03:21:15.678783 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:21:15.678803 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:21:15.678820 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:21:15.678838 | orchestrator |
2026-02-14 03:21:15.678855 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-02-14 03:21:15.678874 | orchestrator | Saturday 14 February 2026 03:21:09 +0000 (0:00:00.494) 0:01:09.721 *****
2026-02-14 03:21:15.678892 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:21:15.678911 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:21:15.678984 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:21:15.679006 | orchestrator |
2026-02-14 03:21:15.679027 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-02-14 03:21:15.679066 | orchestrator | Saturday 14 February 2026 03:21:09 +0000 (0:00:00.297) 0:01:10.018 *****
2026-02-14 03:21:15.679083 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:21:15.679102 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:21:15.679121 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:21:15.679139 | orchestrator |
2026-02-14 03:21:15.679158 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-02-14 03:21:15.679176 | orchestrator | Saturday 14 February 2026 03:21:09 +0000 (0:00:00.295) 0:01:10.313 *****
2026-02-14 03:21:15.679226 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:21:15.679246 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:21:15.679265 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:21:15.679282 | orchestrator |
2026-02-14 03:21:15.679299 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-02-14 03:21:15.679330 | orchestrator | Saturday 14 February 2026 03:21:10 +0000 (0:00:00.267) 0:01:10.581 *****
2026-02-14 03:21:15.679379 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 03:21:15.679400 | orchestrator |
2026-02-14 03:21:15.679417 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-02-14 03:21:15.679436 | orchestrator | Saturday 14 February 2026 03:21:10 +0000 (0:00:00.730) 0:01:11.312 *****
2026-02-14 03:21:15.679454 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:21:15.679474 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:21:15.679492 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:21:15.679508 | orchestrator |
2026-02-14 03:21:15.679526 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-02-14 03:21:15.679544 | orchestrator | Saturday 14 February 2026 03:21:11 +0000 (0:00:00.450) 0:01:11.762 *****
2026-02-14 03:21:15.679563 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:21:15.679581 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:21:15.679601 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:21:15.679618 | orchestrator |
2026-02-14 03:21:15.679636 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-02-14 03:21:15.679654 | orchestrator | Saturday 14 February 2026 03:21:11 +0000 (0:00:00.420) 0:01:12.183 *****
2026-02-14 03:21:15.679673 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:21:15.679694 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:21:15.679710 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:21:15.679726 | orchestrator |
2026-02-14 03:21:15.679742 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-02-14 03:21:15.679760 | orchestrator | Saturday 14 February 2026 03:21:12 +0000 (0:00:00.323) 0:01:12.506 *****
2026-02-14 03:21:15.679779 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:21:15.679796 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:21:15.679814 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:21:15.679831 | orchestrator |
2026-02-14 03:21:15.679847 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-02-14 03:21:15.679864 | orchestrator | Saturday 14 February 2026 03:21:12 +0000 (0:00:00.559) 0:01:13.066 *****
2026-02-14 03:21:15.679882 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:21:15.679900 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:21:15.679918 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:21:15.680011 | orchestrator |
2026-02-14 03:21:15.680031 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-02-14 03:21:15.680049 | orchestrator | Saturday 14 February 2026 03:21:13 +0000 (0:00:00.328) 0:01:13.394 *****
2026-02-14 03:21:15.680067 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:21:15.680084 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:21:15.680102 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:21:15.680120 | orchestrator |
2026-02-14 03:21:15.680139 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2026-02-14 03:21:15.680157 | orchestrator | Saturday 14 February 2026 03:21:13 +0000 (0:00:00.335) 0:01:13.730 *****
2026-02-14 03:21:15.680198 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:21:15.680217 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:21:15.680235 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:21:15.680254 | orchestrator |
2026-02-14 03:21:15.680272 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2026-02-14 03:21:15.680291 | orchestrator | Saturday 14 February 2026 03:21:13 +0000 (0:00:00.307) 0:01:14.038 *****
2026-02-14 03:21:15.680310 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:21:15.680327 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:21:15.680343 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:21:15.680359 | orchestrator |
2026-02-14 03:21:15.680377 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-02-14 03:21:15.680393 | orchestrator | Saturday 14 February 2026 03:21:14 +0000 (0:00:00.540) 0:01:14.578 *****
2026-02-14 03:21:15.680413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:15.680434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:15.680451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:15.680496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:21.900641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:21.900764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:21.900790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:21.900810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:21.900856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:21.900870 | orchestrator |
2026-02-14 03:21:21.900883 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-02-14 03:21:21.900895 | orchestrator | Saturday 14 February 2026 03:21:15 +0000 (0:00:01.460) 0:01:16.039 *****
2026-02-14 03:21:21.900907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:21.900921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:21.900994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:21.901009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:21.901057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:21.901070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:21.901082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:21.901094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:21.901118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:21.901138 | orchestrator |
2026-02-14 03:21:21.901156 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-02-14 03:21:21.901177 | orchestrator | Saturday 14 February 2026 03:21:19 +0000 (0:00:03.795) 0:01:19.835 *****
2026-02-14 03:21:21.901198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:21.901219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:21.901238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:21.901252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:21.901265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:21.901294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:40.760411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:40.760550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:40.760567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:40.760580 | orchestrator |
2026-02-14 03:21:40.760593 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-14 03:21:40.760606 | orchestrator | Saturday 14 February 2026 03:21:21 +0000 (0:00:02.052) 0:01:21.888 *****
2026-02-14 03:21:40.760617 | orchestrator |
2026-02-14 03:21:40.760627 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-14 03:21:40.760638 | orchestrator | Saturday 14 February 2026 03:21:21 +0000 (0:00:00.076) 0:01:21.964 *****
2026-02-14 03:21:40.760649 | orchestrator |
2026-02-14 03:21:40.760659 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-14 03:21:40.760670 | orchestrator | Saturday 14 February 2026 03:21:21 +0000 (0:00:00.230) 0:01:22.194 *****
2026-02-14 03:21:40.760681 | orchestrator |
2026-02-14 03:21:40.760692 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-02-14 03:21:40.760703 | orchestrator | Saturday 14 February 2026 03:21:21 +0000 (0:00:00.064) 0:01:22.259 *****
2026-02-14 03:21:40.760714 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:21:40.760726 | orchestrator | changed: [testbed-node-2]
2026-02-14 03:21:40.760737 | orchestrator | changed: [testbed-node-1]
2026-02-14 03:21:40.760747 | orchestrator |
2026-02-14 03:21:40.760758 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-02-14 03:21:40.760769 | orchestrator | Saturday 14 February 2026 03:21:24 +0000 (0:00:02.428) 0:01:24.687 *****
2026-02-14 03:21:40.760780 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:21:40.760791 | orchestrator | changed: [testbed-node-1]
2026-02-14 03:21:40.760802 | orchestrator | changed: [testbed-node-2]
2026-02-14 03:21:40.760812 | orchestrator |
2026-02-14 03:21:40.760823 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-02-14 03:21:40.760840 | orchestrator | Saturday 14 February 2026 03:21:31 +0000 (0:00:07.362) 0:01:32.049 *****
2026-02-14 03:21:40.760859 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:21:40.760878 | orchestrator | changed: [testbed-node-1]
2026-02-14 03:21:40.760895 | orchestrator | changed: [testbed-node-2]
2026-02-14 03:21:40.760915 | orchestrator |
2026-02-14 03:21:40.760934 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-02-14 03:21:40.760952 | orchestrator | Saturday 14 February 2026 03:21:34 +0000 (0:00:02.423) 0:01:34.472 *****
2026-02-14 03:21:40.761011 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:21:40.761032 | orchestrator |
2026-02-14 03:21:40.761050 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-02-14 03:21:40.761070 | orchestrator | Saturday 14 February 2026 03:21:34 +0000 (0:00:00.121) 0:01:34.594 *****
2026-02-14 03:21:40.761089 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:21:40.761108 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:21:40.761127 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:21:40.761138 | orchestrator |
2026-02-14 03:21:40.761149 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-02-14 03:21:40.761160 | orchestrator | Saturday 14 February 2026 03:21:35 +0000 (0:00:00.917) 0:01:35.512 *****
2026-02-14 03:21:40.761171 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:21:40.761193 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:21:40.761204 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:21:40.761215 | orchestrator |
2026-02-14 03:21:40.761226 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-02-14 03:21:40.761237 | orchestrator | Saturday 14 February 2026 03:21:35 +0000 (0:00:00.610) 0:01:36.122 *****
2026-02-14 03:21:40.761247 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:21:40.761258 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:21:40.761269 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:21:40.761279 | orchestrator |
2026-02-14 03:21:40.761290 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-02-14 03:21:40.761316 | orchestrator | Saturday 14 February 2026 03:21:36 +0000 (0:00:00.737) 0:01:36.859 *****
2026-02-14 03:21:40.761328 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:21:40.761338 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:21:40.761349 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:21:40.761367 | orchestrator |
2026-02-14 03:21:40.761385 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-02-14 03:21:40.761403 | orchestrator | Saturday 14 February 2026 03:21:37 +0000 (0:00:00.682) 0:01:37.542 *****
2026-02-14 03:21:40.761421 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:21:40.761439 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:21:40.761481 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:21:40.761501 | orchestrator |
2026-02-14 03:21:40.761517 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-02-14 03:21:40.761527 | orchestrator | Saturday 14 February 2026 03:21:38 +0000 (0:00:01.166) 0:01:38.708 *****
2026-02-14 03:21:40.761538 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:21:40.761549 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:21:40.761559 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:21:40.761570 | orchestrator |
2026-02-14 03:21:40.761581 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-02-14 03:21:40.761592 | orchestrator | Saturday 14 February 2026 03:21:39 +0000 (0:00:00.331) 0:01:39.439 *****
2026-02-14 03:21:40.761602 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:21:40.761613 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:21:40.761623 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:21:40.761634 | orchestrator |
2026-02-14 03:21:40.761645 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-02-14 03:21:40.761655 | orchestrator | Saturday 14 February 2026 03:21:39 +0000 (0:00:00.331) 0:01:39.771 *****
2026-02-14 03:21:40.761668 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:40.761682 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:40.761694 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:40.761705 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:40.761725 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:40.761737 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:40.761748 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:40.761765 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:40.761785 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:47.680307 | orchestrator |
2026-02-14 03:21:47.680415 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-02-14 03:21:47.680439 | orchestrator | Saturday 14 February 2026 03:21:40 +0000 (0:00:01.349) 0:01:41.121 *****
2026-02-14 03:21:47.680462 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:47.680485 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:47.680505 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:47.680526 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:47.680564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:47.680575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:47.680587 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:47.680598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:47.680624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:47.680636 | orchestrator |
2026-02-14 03:21:47.680648 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-02-14 03:21:47.680659 | orchestrator | Saturday 14 February 2026 03:21:44 +0000 (0:00:03.785) 0:01:44.906 *****
2026-02-14 03:21:47.680688 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:47.680701 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:47.680712 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:47.680724 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:47.680743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:47.680754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:47.680765 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:47.680776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:47.680793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 03:21:47.680804 | orchestrator |
2026-02-14 03:21:47.680815 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-14 03:21:47.680827 | orchestrator | Saturday 14 February 2026 03:21:47 +0000 (0:00:02.931) 0:01:47.837 *****
2026-02-14 03:21:47.680840 | orchestrator |
2026-02-14 03:21:47.680853 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-14 03:21:47.680865 | orchestrator | Saturday 14 February 2026 03:21:47 +0000 (0:00:00.060) 0:01:47.898 *****
2026-02-14 03:21:47.680878 | orchestrator |
2026-02-14 03:21:47.680890 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-14 03:21:47.680902 | orchestrator | Saturday 14 February 2026 03:21:47 +0000 (0:00:00.066) 0:01:47.965 *****
2026-02-14 03:21:47.680915 | orchestrator |
2026-02-14 03:21:47.680933 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-02-14 03:22:11.695386 | orchestrator | Saturday 14 February 2026 03:21:47 +0000 (0:00:00.064) 0:01:48.030 *****
2026-02-14 03:22:11.695498 | orchestrator | changed: [testbed-node-1]
2026-02-14 03:22:11.695515 | orchestrator | changed: 
[testbed-node-2] 2026-02-14 03:22:11.695526 | orchestrator | 2026-02-14 03:22:11.695538 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-14 03:22:11.695550 | orchestrator | Saturday 14 February 2026 03:21:53 +0000 (0:00:06.227) 0:01:54.258 ***** 2026-02-14 03:22:11.695561 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:22:11.695572 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:22:11.695583 | orchestrator | 2026-02-14 03:22:11.695595 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-02-14 03:22:11.695631 | orchestrator | Saturday 14 February 2026 03:22:00 +0000 (0:00:06.159) 0:02:00.417 ***** 2026-02-14 03:22:11.695642 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:22:11.695653 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:22:11.695664 | orchestrator | 2026-02-14 03:22:11.695675 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-02-14 03:22:11.695686 | orchestrator | Saturday 14 February 2026 03:22:06 +0000 (0:00:06.148) 0:02:06.565 ***** 2026-02-14 03:22:11.695697 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:22:11.695708 | orchestrator | 2026-02-14 03:22:11.695719 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-02-14 03:22:11.695730 | orchestrator | Saturday 14 February 2026 03:22:06 +0000 (0:00:00.132) 0:02:06.698 ***** 2026-02-14 03:22:11.695741 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:22:11.695753 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:22:11.695764 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:22:11.695775 | orchestrator | 2026-02-14 03:22:11.695786 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-14 03:22:11.695796 | orchestrator | Saturday 14 February 2026 03:22:07 +0000 (0:00:01.000) 0:02:07.698 ***** 
2026-02-14 03:22:11.695807 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:22:11.695818 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:22:11.695829 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:22:11.695840 | orchestrator | 2026-02-14 03:22:11.695851 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-02-14 03:22:11.695862 | orchestrator | Saturday 14 February 2026 03:22:07 +0000 (0:00:00.649) 0:02:08.347 ***** 2026-02-14 03:22:11.695874 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:22:11.695885 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:22:11.695896 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:22:11.695906 | orchestrator | 2026-02-14 03:22:11.695918 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-02-14 03:22:11.695931 | orchestrator | Saturday 14 February 2026 03:22:08 +0000 (0:00:00.797) 0:02:09.145 ***** 2026-02-14 03:22:11.695943 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:22:11.695955 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:22:11.695968 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:22:11.695981 | orchestrator | 2026-02-14 03:22:11.695993 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-02-14 03:22:11.696005 | orchestrator | Saturday 14 February 2026 03:22:09 +0000 (0:00:00.687) 0:02:09.832 ***** 2026-02-14 03:22:11.696049 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:22:11.696062 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:22:11.696074 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:22:11.696087 | orchestrator | 2026-02-14 03:22:11.696099 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-02-14 03:22:11.696111 | orchestrator | Saturday 14 February 2026 03:22:10 +0000 (0:00:00.995) 0:02:10.828 ***** 2026-02-14 03:22:11.696124 | orchestrator 
| ok: [testbed-node-0] 2026-02-14 03:22:11.696136 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:22:11.696148 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:22:11.696160 | orchestrator | 2026-02-14 03:22:11.696172 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 03:22:11.696186 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-14 03:22:11.696206 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-02-14 03:22:11.696225 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-02-14 03:22:11.696253 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-14 03:22:11.696289 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-14 03:22:11.696308 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-14 03:22:11.696326 | orchestrator | 2026-02-14 03:22:11.696344 | orchestrator | 2026-02-14 03:22:11.696381 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 03:22:11.696401 | orchestrator | Saturday 14 February 2026 03:22:11 +0000 (0:00:00.875) 0:02:11.703 ***** 2026-02-14 03:22:11.696421 | orchestrator | =============================================================================== 2026-02-14 03:22:11.696441 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 30.23s 2026-02-14 03:22:11.696460 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.91s 2026-02-14 03:22:11.696479 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.52s 2026-02-14 03:22:11.696495 | orchestrator | ovn-db 
: Restart ovn-nb-db container ------------------------------------ 8.66s 2026-02-14 03:22:11.696507 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.57s 2026-02-14 03:22:11.696539 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.80s 2026-02-14 03:22:11.696551 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.79s 2026-02-14 03:22:11.696562 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.93s 2026-02-14 03:22:11.696572 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.42s 2026-02-14 03:22:11.696583 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.05s 2026-02-14 03:22:11.696594 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.58s 2026-02-14 03:22:11.696605 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.56s 2026-02-14 03:22:11.696616 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.48s 2026-02-14 03:22:11.696626 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.46s 2026-02-14 03:22:11.696637 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.35s 2026-02-14 03:22:11.696648 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.31s 2026-02-14 03:22:11.696658 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.19s 2026-02-14 03:22:11.696669 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.17s 2026-02-14 03:22:11.696680 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.10s 2026-02-14 03:22:11.696691 | orchestrator | ovn-controller : 
include_tasks ------------------------------------------ 1.06s 2026-02-14 03:22:12.006631 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-14 03:22:12.006727 | orchestrator | + sh -c /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh 2026-02-14 03:22:14.200501 | orchestrator | 2026-02-14 03:22:14 | INFO  | Trying to run play wipe-partitions in environment custom 2026-02-14 03:22:24.368630 | orchestrator | 2026-02-14 03:22:24 | INFO  | Task 56f2cc18-90d6-4a56-9e87-d286fe1360cf (wipe-partitions) was prepared for execution. 2026-02-14 03:22:24.368742 | orchestrator | 2026-02-14 03:22:24 | INFO  | It takes a moment until task 56f2cc18-90d6-4a56-9e87-d286fe1360cf (wipe-partitions) has been started and output is visible here. 2026-02-14 03:22:37.167986 | orchestrator | 2026-02-14 03:22:37.168153 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-02-14 03:22:37.168172 | orchestrator | 2026-02-14 03:22:37.168184 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-02-14 03:22:37.168196 | orchestrator | Saturday 14 February 2026 03:22:28 +0000 (0:00:00.126) 0:00:00.126 ***** 2026-02-14 03:22:37.168230 | orchestrator | changed: [testbed-node-3] 2026-02-14 03:22:37.168243 | orchestrator | changed: [testbed-node-4] 2026-02-14 03:22:37.168254 | orchestrator | changed: [testbed-node-5] 2026-02-14 03:22:37.168265 | orchestrator | 2026-02-14 03:22:37.168276 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-02-14 03:22:37.168287 | orchestrator | Saturday 14 February 2026 03:22:29 +0000 (0:00:00.579) 0:00:00.706 ***** 2026-02-14 03:22:37.168298 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:22:37.168309 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:22:37.168331 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:22:37.168342 | orchestrator | 2026-02-14 03:22:37.168353 | 
orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-02-14 03:22:37.168364 | orchestrator | Saturday 14 February 2026 03:22:29 +0000 (0:00:00.372) 0:00:01.079 ***** 2026-02-14 03:22:37.168375 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:22:37.168387 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:22:37.168398 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:22:37.168409 | orchestrator | 2026-02-14 03:22:37.168420 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-02-14 03:22:37.168430 | orchestrator | Saturday 14 February 2026 03:22:30 +0000 (0:00:00.621) 0:00:01.700 ***** 2026-02-14 03:22:37.168441 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:22:37.168452 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:22:37.168464 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:22:37.168475 | orchestrator | 2026-02-14 03:22:37.168486 | orchestrator | TASK [Check device availability] *********************************************** 2026-02-14 03:22:37.168496 | orchestrator | Saturday 14 February 2026 03:22:30 +0000 (0:00:00.270) 0:00:01.970 ***** 2026-02-14 03:22:37.168507 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-02-14 03:22:37.168519 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-02-14 03:22:37.168532 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-02-14 03:22:37.168545 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-02-14 03:22:37.168557 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-02-14 03:22:37.168570 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-02-14 03:22:37.168598 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-02-14 03:22:37.168610 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-02-14 03:22:37.168622 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 
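The "Check device availability" task above loops over /dev/sdb through /dev/sdd on each storage node before anything is wiped. A minimal local sketch of such a pre-wipe check (the device list and function name are illustrative, not taken from the playbook):

```shell
#!/bin/sh
# Sketch of a pre-wipe availability check. The play iterates candidate
# devices per node; here we only classify paths so the logic runs anywhere.
check_device() {
    dev="$1"
    if [ -b "$dev" ]; then
        echo "$dev: block device, ok"
    elif [ -e "$dev" ]; then
        echo "$dev: exists but is not a block device"
    else
        echo "$dev: missing"
    fi
}
# /dev/null is a character device, so it exists but is not a block device
check_device /dev/null
check_device /dev/does-not-exist
```

On a real node the same test against /dev/sdb..sdd would take the first branch; a missing device would be the point to fail the play early.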
2026-02-14 03:22:37.168634 | orchestrator | 2026-02-14 03:22:37.168647 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-02-14 03:22:37.168659 | orchestrator | Saturday 14 February 2026 03:22:31 +0000 (0:00:01.232) 0:00:03.202 ***** 2026-02-14 03:22:37.168672 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-02-14 03:22:37.168685 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-02-14 03:22:37.168698 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-02-14 03:22:37.168710 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-02-14 03:22:37.168723 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-02-14 03:22:37.168735 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2026-02-14 03:22:37.168748 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-02-14 03:22:37.168760 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-02-14 03:22:37.168773 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-02-14 03:22:37.168786 | orchestrator | 2026-02-14 03:22:37.168799 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-02-14 03:22:37.168811 | orchestrator | Saturday 14 February 2026 03:22:33 +0000 (0:00:01.590) 0:00:04.793 ***** 2026-02-14 03:22:37.168824 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-02-14 03:22:37.168836 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-02-14 03:22:37.168848 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-02-14 03:22:37.168861 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-02-14 03:22:37.168881 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-02-14 03:22:37.168892 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-02-14 03:22:37.168903 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-02-14 03:22:37.168914 | orchestrator | 
changed: [testbed-node-4] => (item=/dev/sdd) 2026-02-14 03:22:37.168924 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-02-14 03:22:37.168935 | orchestrator | 2026-02-14 03:22:37.168946 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-02-14 03:22:37.168957 | orchestrator | Saturday 14 February 2026 03:22:35 +0000 (0:00:02.151) 0:00:06.944 ***** 2026-02-14 03:22:37.168968 | orchestrator | changed: [testbed-node-3] 2026-02-14 03:22:37.168979 | orchestrator | changed: [testbed-node-4] 2026-02-14 03:22:37.168989 | orchestrator | changed: [testbed-node-5] 2026-02-14 03:22:37.169000 | orchestrator | 2026-02-14 03:22:37.169011 | orchestrator | TASK [Request device events from the kernel] *********************************** 2026-02-14 03:22:37.169022 | orchestrator | Saturday 14 February 2026 03:22:35 +0000 (0:00:00.639) 0:00:07.583 ***** 2026-02-14 03:22:37.169033 | orchestrator | changed: [testbed-node-3] 2026-02-14 03:22:37.169044 | orchestrator | changed: [testbed-node-4] 2026-02-14 03:22:37.169055 | orchestrator | changed: [testbed-node-5] 2026-02-14 03:22:37.169136 | orchestrator | 2026-02-14 03:22:37.169148 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 03:22:37.169160 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-14 03:22:37.169172 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-14 03:22:37.169201 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-14 03:22:37.169213 | orchestrator | 2026-02-14 03:22:37.169224 | orchestrator | 2026-02-14 03:22:37.169235 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 03:22:37.169246 | orchestrator | Saturday 14 February 2026 03:22:36 +0000 
(0:00:00.716) 0:00:08.300 ***** 2026-02-14 03:22:37.169257 | orchestrator | =============================================================================== 2026-02-14 03:22:37.169268 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.15s 2026-02-14 03:22:37.169279 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.59s 2026-02-14 03:22:37.169290 | orchestrator | Check device availability ----------------------------------------------- 1.23s 2026-02-14 03:22:37.169300 | orchestrator | Request device events from the kernel ----------------------------------- 0.72s 2026-02-14 03:22:37.169311 | orchestrator | Reload udev rules ------------------------------------------------------- 0.64s 2026-02-14 03:22:37.169322 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.62s 2026-02-14 03:22:37.169333 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.58s 2026-02-14 03:22:37.169343 | orchestrator | Remove all rook related logical devices --------------------------------- 0.37s 2026-02-14 03:22:37.169354 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.27s 2026-02-14 03:22:49.778253 | orchestrator | 2026-02-14 03:22:49 | INFO  | Task 68114da2-7514-45fc-93e6-dffacdacb25d (facts) was prepared for execution. 2026-02-14 03:22:49.778358 | orchestrator | 2026-02-14 03:22:49 | INFO  | It takes a moment until task 68114da2-7514-45fc-93e6-dffacdacb25d (facts) has been started and output is visible here. 
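The wipe-partitions play above runs two destructive steps per device: erase filesystem signatures with wipefs, then overwrite the first 32M with zeros, followed by a udev reload and retrigger. A sketch of that sequence against a scratch file instead of a real disk (the play targets /dev/sdb..sdd; the temp-file substitution is ours so the commands are safely runnable):

```shell
#!/bin/sh
# Simulate the wipe sequence on a throwaway file, not a block device.
img=$(mktemp)
# give the fake "device" some non-zero content to wipe
printf 'FAKE-PARTITION-TABLE' > "$img"
# on a real device the play first runs: wipefs -a "$dev"
# step "Overwrite first 32M with zeros":
dd if=/dev/zero of="$img" bs=1M count=32 conv=notrunc status=none
# the play then reloads udev rules and requests device events:
#   udevadm control --reload-rules && udevadm trigger
# confirm the leading bytes are now zero
head -c 8 "$img" | od -An -tx1
rm -f "$img"
```

The `conv=notrunc` keeps a real device node's size untouched; only the leading region is zeroed, which is enough to destroy partition tables and most on-disk signatures.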
2026-02-14 03:23:02.600372 | orchestrator | 2026-02-14 03:23:02.600484 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-14 03:23:02.600498 | orchestrator | 2026-02-14 03:23:02.600509 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-14 03:23:02.600518 | orchestrator | Saturday 14 February 2026 03:22:54 +0000 (0:00:00.263) 0:00:00.263 ***** 2026-02-14 03:23:02.600549 | orchestrator | ok: [testbed-manager] 2026-02-14 03:23:02.600561 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:23:02.600569 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:23:02.600578 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:23:02.600587 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:23:02.600595 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:23:02.600604 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:23:02.600613 | orchestrator | 2026-02-14 03:23:02.600622 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-14 03:23:02.600632 | orchestrator | Saturday 14 February 2026 03:22:55 +0000 (0:00:01.145) 0:00:01.409 ***** 2026-02-14 03:23:02.600641 | orchestrator | skipping: [testbed-manager] 2026-02-14 03:23:02.600651 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:23:02.600660 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:23:02.600669 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:23:02.600677 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:23:02.600686 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:23:02.600695 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:23:02.600703 | orchestrator | 2026-02-14 03:23:02.600712 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-14 03:23:02.600721 | orchestrator | 2026-02-14 03:23:02.600730 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-14 03:23:02.600739 | orchestrator | Saturday 14 February 2026 03:22:56 +0000 (0:00:01.233) 0:00:02.642 ***** 2026-02-14 03:23:02.600747 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:23:02.600756 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:23:02.600765 | orchestrator | ok: [testbed-manager] 2026-02-14 03:23:02.600774 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:23:02.600782 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:23:02.600791 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:23:02.600800 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:23:02.600808 | orchestrator | 2026-02-14 03:23:02.600817 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-14 03:23:02.600826 | orchestrator | 2026-02-14 03:23:02.600835 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-14 03:23:02.600844 | orchestrator | Saturday 14 February 2026 03:23:01 +0000 (0:00:05.211) 0:00:07.853 ***** 2026-02-14 03:23:02.600853 | orchestrator | skipping: [testbed-manager] 2026-02-14 03:23:02.600861 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:23:02.600870 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:23:02.600879 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:23:02.600887 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:23:02.600896 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:23:02.600905 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:23:02.600913 | orchestrator | 2026-02-14 03:23:02.600922 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 03:23:02.600932 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-14 03:23:02.601016 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-14 03:23:02.601032 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-14 03:23:02.601041 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-14 03:23:02.601050 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-14 03:23:02.601059 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-14 03:23:02.601075 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-14 03:23:02.601084 | orchestrator | 2026-02-14 03:23:02.601093 | orchestrator | 2026-02-14 03:23:02.601130 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 03:23:02.601147 | orchestrator | Saturday 14 February 2026 03:23:02 +0000 (0:00:00.550) 0:00:08.403 ***** 2026-02-14 03:23:02.601163 | orchestrator | =============================================================================== 2026-02-14 03:23:02.601177 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.21s 2026-02-14 03:23:02.601191 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.23s 2026-02-14 03:23:02.601199 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.15s 2026-02-14 03:23:02.601209 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s 2026-02-14 03:23:05.128211 | orchestrator | 2026-02-14 03:23:05 | INFO  | Task 9bccafac-33bb-4d45-9443-fe4ced7ae25e (ceph-configure-lvm-volumes) was prepared for execution. 
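The osism.commons.facts play above creates a custom facts directory and (when configured) copies fact files into it; Ansible then exposes JSON or executable ".fact" files from /etc/ansible/facts.d as `ansible_local`. A sketch of that mechanism using a temp directory rather than /etc (the file name and keys are illustrative, not the role's actual facts):

```shell
#!/bin/sh
# Write a static JSON fact file the way facts.d expects, then sanity-check
# that it parses, without involving Ansible itself.
factdir=$(mktemp -d)
cat > "$factdir/testbed.fact" <<'EOF'
{"deployed_by": "osism", "environment": "testbed"}
EOF
python3 -c "import json,sys; json.load(open(sys.argv[1]))" "$factdir/testbed.fact" \
    && echo "fact file is valid JSON"
rm -rf "$factdir"
```

With the file installed under /etc/ansible/facts.d, a subsequent `setup` run would surface it as `ansible_local.testbed.deployed_by`, which is why the play regathers facts for all hosts right after the copy step.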
2026-02-14 03:23:05.128331 | orchestrator | 2026-02-14 03:23:05 | INFO  | It takes a moment until task 9bccafac-33bb-4d45-9443-fe4ced7ae25e (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-02-14 03:23:17.245662 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-14 03:23:17.245782 | orchestrator | 2.16.14 2026-02-14 03:23:17.245801 | orchestrator | 2026-02-14 03:23:17.245815 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-02-14 03:23:17.245827 | orchestrator | 2026-02-14 03:23:17.245838 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-14 03:23:17.245849 | orchestrator | Saturday 14 February 2026 03:23:09 +0000 (0:00:00.338) 0:00:00.338 ***** 2026-02-14 03:23:17.245861 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-14 03:23:17.245872 | orchestrator | 2026-02-14 03:23:17.245898 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-14 03:23:17.245910 | orchestrator | Saturday 14 February 2026 03:23:09 +0000 (0:00:00.267) 0:00:00.605 ***** 2026-02-14 03:23:17.245922 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:23:17.245933 | orchestrator | 2026-02-14 03:23:17.245944 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-14 03:23:17.245955 | orchestrator | Saturday 14 February 2026 03:23:10 +0000 (0:00:00.249) 0:00:00.854 ***** 2026-02-14 03:23:17.245966 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-02-14 03:23:17.245977 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-02-14 03:23:17.245988 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-02-14 03:23:17.245999 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-02-14 03:23:17.246010 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-02-14 03:23:17.246082 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-02-14 03:23:17.246094 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-02-14 03:23:17.246106 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-02-14 03:23:17.246117 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-02-14 03:23:17.246154 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-02-14 03:23:17.246166 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-02-14 03:23:17.246177 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-02-14 03:23:17.246213 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-02-14 03:23:17.246224 | orchestrator |
2026-02-14 03:23:17.246235 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:23:17.246246 | orchestrator | Saturday 14 February 2026 03:23:10 +0000 (0:00:00.480) 0:00:01.335 *****
2026-02-14 03:23:17.246257 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:23:17.246269 | orchestrator |
2026-02-14 03:23:17.246280 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:23:17.246291 | orchestrator | Saturday 14 February 2026 03:23:10 +0000 (0:00:00.200) 0:00:01.535 *****
2026-02-14 03:23:17.246302 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:23:17.246313 | orchestrator |
2026-02-14 03:23:17.246324 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:23:17.246334 | orchestrator | Saturday 14 February 2026 03:23:10 +0000 (0:00:00.209) 0:00:01.745 *****
2026-02-14 03:23:17.246345 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:23:17.246356 | orchestrator |
2026-02-14 03:23:17.246367 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:23:17.246378 | orchestrator | Saturday 14 February 2026 03:23:11 +0000 (0:00:00.194) 0:00:01.939 *****
2026-02-14 03:23:17.246389 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:23:17.246399 | orchestrator |
2026-02-14 03:23:17.246410 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:23:17.246421 | orchestrator | Saturday 14 February 2026 03:23:11 +0000 (0:00:00.200) 0:00:02.139 *****
2026-02-14 03:23:17.246432 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:23:17.246443 | orchestrator |
2026-02-14 03:23:17.246454 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:23:17.246465 | orchestrator | Saturday 14 February 2026 03:23:11 +0000 (0:00:00.224) 0:00:02.364 *****
2026-02-14 03:23:17.246475 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:23:17.246486 | orchestrator |
2026-02-14 03:23:17.246497 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:23:17.246508 | orchestrator | Saturday 14 February 2026 03:23:11 +0000 (0:00:00.200) 0:00:02.564 *****
2026-02-14 03:23:17.246519 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:23:17.246529 | orchestrator |
2026-02-14 03:23:17.246540 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:23:17.246551 | orchestrator | Saturday 14 February 2026 03:23:11 +0000 (0:00:00.214) 0:00:02.779 *****
2026-02-14 03:23:17.246562 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:23:17.246573 | orchestrator |
2026-02-14 03:23:17.246584 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:23:17.246594 | orchestrator | Saturday 14 February 2026 03:23:12 +0000 (0:00:00.247) 0:00:03.027 *****
2026-02-14 03:23:17.246605 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2)
2026-02-14 03:23:17.246618 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2)
2026-02-14 03:23:17.246628 | orchestrator |
2026-02-14 03:23:17.246639 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:23:17.246670 | orchestrator | Saturday 14 February 2026 03:23:12 +0000 (0:00:00.442) 0:00:03.469 *****
2026-02-14 03:23:17.246681 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_763dae4f-8aba-40cd-b4e7-eeabad093491)
2026-02-14 03:23:17.246692 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_763dae4f-8aba-40cd-b4e7-eeabad093491)
2026-02-14 03:23:17.246703 | orchestrator |
2026-02-14 03:23:17.246714 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:23:17.246725 | orchestrator | Saturday 14 February 2026 03:23:13 +0000 (0:00:00.696) 0:00:04.166 *****
2026-02-14 03:23:17.246742 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2ec12fdb-ec43-4dc2-9206-4086e60213b8)
2026-02-14 03:23:17.246762 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2ec12fdb-ec43-4dc2-9206-4086e60213b8)
2026-02-14 03:23:17.246773 | orchestrator |
2026-02-14 03:23:17.246784 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:23:17.246795 | orchestrator | Saturday 14 February 2026 03:23:14 +0000 (0:00:00.659) 0:00:04.826 *****
2026-02-14 03:23:17.246806 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8657c064-423f-4604-b6db-e42322d0b025)
2026-02-14 03:23:17.246817 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8657c064-423f-4604-b6db-e42322d0b025)
2026-02-14 03:23:17.246827 | orchestrator |
2026-02-14 03:23:17.246838 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:23:17.246849 | orchestrator | Saturday 14 February 2026 03:23:14 +0000 (0:00:00.898) 0:00:05.725 *****
2026-02-14 03:23:17.246860 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-14 03:23:17.246871 | orchestrator |
2026-02-14 03:23:17.246881 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:23:17.246892 | orchestrator | Saturday 14 February 2026 03:23:15 +0000 (0:00:00.367) 0:00:06.093 *****
2026-02-14 03:23:17.246903 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-02-14 03:23:17.246914 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-02-14 03:23:17.246924 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-02-14 03:23:17.246935 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-02-14 03:23:17.246946 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-02-14 03:23:17.246956 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-02-14 03:23:17.246967 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-02-14 03:23:17.246978 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-02-14 03:23:17.246988 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-02-14 03:23:17.246999 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-02-14 03:23:17.247010 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-02-14 03:23:17.247021 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-02-14 03:23:17.247031 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-02-14 03:23:17.247042 | orchestrator |
2026-02-14 03:23:17.247053 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:23:17.247064 | orchestrator | Saturday 14 February 2026 03:23:15 +0000 (0:00:00.387) 0:00:06.481 *****
2026-02-14 03:23:17.247075 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:23:17.247085 | orchestrator |
2026-02-14 03:23:17.247096 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:23:17.247107 | orchestrator | Saturday 14 February 2026 03:23:15 +0000 (0:00:00.213) 0:00:06.694 *****
2026-02-14 03:23:17.247118 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:23:17.247146 | orchestrator |
2026-02-14 03:23:17.247157 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:23:17.247168 | orchestrator | Saturday 14 February 2026 03:23:16 +0000 (0:00:00.216) 0:00:06.911 *****
2026-02-14 03:23:17.247179 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:23:17.247190 | orchestrator |
2026-02-14 03:23:17.247200 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:23:17.247211 | orchestrator | Saturday 14 February 2026 03:23:16 +0000 (0:00:00.245) 0:00:07.156 *****
2026-02-14 03:23:17.247229 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:23:17.247240 | orchestrator |
2026-02-14 03:23:17.247251 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:23:17.247262 | orchestrator | Saturday 14 February 2026 03:23:16 +0000 (0:00:00.205) 0:00:07.362 *****
2026-02-14 03:23:17.247272 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:23:17.247283 | orchestrator |
2026-02-14 03:23:17.247294 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:23:17.247305 | orchestrator | Saturday 14 February 2026 03:23:16 +0000 (0:00:00.205) 0:00:07.568 *****
2026-02-14 03:23:17.247316 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:23:17.247326 | orchestrator |
2026-02-14 03:23:17.247337 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:23:17.247348 | orchestrator | Saturday 14 February 2026 03:23:16 +0000 (0:00:00.220) 0:00:07.789 *****
2026-02-14 03:23:17.247359 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:23:17.247369 | orchestrator |
2026-02-14 03:23:17.247386 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:23:25.052575 | orchestrator | Saturday 14 February 2026 03:23:17 +0000 (0:00:00.235) 0:00:08.024 *****
2026-02-14 03:23:25.052650 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:23:25.052657 | orchestrator |
2026-02-14 03:23:25.052663 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:23:25.052667 | orchestrator | Saturday 14 February 2026 03:23:17 +0000 (0:00:00.210) 0:00:08.235 *****
2026-02-14 03:23:25.052672 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-02-14 03:23:25.052678 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-02-14 03:23:25.052682 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-02-14 03:23:25.052697 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-02-14 03:23:25.052701 | orchestrator |
2026-02-14 03:23:25.052706 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:23:25.052710 | orchestrator | Saturday 14 February 2026 03:23:18 +0000 (0:00:01.047) 0:00:09.282 *****
2026-02-14 03:23:25.052714 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:23:25.052718 | orchestrator |
2026-02-14 03:23:25.052722 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:23:25.052726 | orchestrator | Saturday 14 February 2026 03:23:18 +0000 (0:00:00.238) 0:00:09.521 *****
2026-02-14 03:23:25.052730 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:23:25.052734 | orchestrator |
2026-02-14 03:23:25.052738 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:23:25.052742 | orchestrator | Saturday 14 February 2026 03:23:18 +0000 (0:00:00.216) 0:00:09.738 *****
2026-02-14 03:23:25.052746 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:23:25.052750 | orchestrator |
2026-02-14 03:23:25.052754 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:23:25.052758 | orchestrator | Saturday 14 February 2026 03:23:19 +0000 (0:00:00.219) 0:00:09.958 *****
2026-02-14 03:23:25.052762 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:23:25.052766 | orchestrator |
2026-02-14 03:23:25.052770 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-02-14 03:23:25.052774 | orchestrator | Saturday 14 February 2026 03:23:19 +0000 (0:00:00.208) 0:00:10.166 *****
2026-02-14 03:23:25.052778 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-02-14 03:23:25.052782 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-02-14 03:23:25.052786 | orchestrator |
2026-02-14 03:23:25.052790 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-02-14 03:23:25.052794 | orchestrator | Saturday 14 February 2026 03:23:19 +0000 (0:00:00.192) 0:00:10.359 *****
2026-02-14 03:23:25.052798 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:23:25.052802 | orchestrator |
2026-02-14 03:23:25.052806 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-02-14 03:23:25.052810 | orchestrator | Saturday 14 February 2026 03:23:19 +0000 (0:00:00.142) 0:00:10.501 *****
2026-02-14 03:23:25.052827 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:23:25.052831 | orchestrator |
2026-02-14 03:23:25.052835 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-02-14 03:23:25.052839 | orchestrator | Saturday 14 February 2026 03:23:19 +0000 (0:00:00.152) 0:00:10.653 *****
2026-02-14 03:23:25.052843 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:23:25.052847 | orchestrator |
2026-02-14 03:23:25.052851 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-02-14 03:23:25.052855 | orchestrator | Saturday 14 February 2026 03:23:20 +0000 (0:00:00.149) 0:00:10.803 *****
2026-02-14 03:23:25.052859 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:23:25.052864 | orchestrator |
2026-02-14 03:23:25.052868 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-02-14 03:23:25.052872 | orchestrator | Saturday 14 February 2026 03:23:20 +0000 (0:00:00.146) 0:00:10.950 *****
2026-02-14 03:23:25.052876 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd74a1ea4-c27e-5375-be56-9d9a8e069fa6'}})
2026-02-14 03:23:25.052881 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '86d1df08-738c-52e0-accb-8c0a21213af6'}})
2026-02-14 03:23:25.052885 | orchestrator |
2026-02-14 03:23:25.052889 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-02-14 03:23:25.052893 | orchestrator | Saturday 14 February 2026 03:23:20 +0000 (0:00:00.171) 0:00:11.121 *****
2026-02-14 03:23:25.052897 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd74a1ea4-c27e-5375-be56-9d9a8e069fa6'}})
2026-02-14 03:23:25.052903 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '86d1df08-738c-52e0-accb-8c0a21213af6'}})
2026-02-14 03:23:25.052907 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:23:25.052911 | orchestrator |
2026-02-14 03:23:25.052915 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-02-14 03:23:25.052919 | orchestrator | Saturday 14 February 2026 03:23:20 +0000 (0:00:00.346) 0:00:11.468 *****
2026-02-14 03:23:25.052923 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd74a1ea4-c27e-5375-be56-9d9a8e069fa6'}})
2026-02-14 03:23:25.052927 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '86d1df08-738c-52e0-accb-8c0a21213af6'}})
2026-02-14 03:23:25.052931 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:23:25.052935 | orchestrator |
2026-02-14 03:23:25.052939 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-02-14 03:23:25.052943 | orchestrator | Saturday 14 February 2026 03:23:20 +0000 (0:00:00.151) 0:00:11.620 *****
2026-02-14 03:23:25.052947 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd74a1ea4-c27e-5375-be56-9d9a8e069fa6'}})
2026-02-14 03:23:25.052960 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '86d1df08-738c-52e0-accb-8c0a21213af6'}})
2026-02-14 03:23:25.052964 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:23:25.052968 | orchestrator |
2026-02-14 03:23:25.052972 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-02-14 03:23:25.052976 | orchestrator | Saturday 14 February 2026 03:23:20 +0000 (0:00:00.151) 0:00:11.771 *****
2026-02-14 03:23:25.052980 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:23:25.052984 | orchestrator |
2026-02-14 03:23:25.052988 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-02-14 03:23:25.052995 | orchestrator | Saturday 14 February 2026 03:23:21 +0000 (0:00:00.148) 0:00:11.920 *****
2026-02-14 03:23:25.052999 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:23:25.053003 | orchestrator |
2026-02-14 03:23:25.053007 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-02-14 03:23:25.053011 | orchestrator | Saturday 14 February 2026 03:23:21 +0000 (0:00:00.136) 0:00:12.057 *****
2026-02-14 03:23:25.053018 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:23:25.053022 | orchestrator |
2026-02-14 03:23:25.053026 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-02-14 03:23:25.053030 | orchestrator | Saturday 14 February 2026 03:23:21 +0000 (0:00:00.145) 0:00:12.203 *****
2026-02-14 03:23:25.053034 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:23:25.053038 | orchestrator |
2026-02-14 03:23:25.053042 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-02-14 03:23:25.053046 | orchestrator | Saturday 14 February 2026 03:23:21 +0000 (0:00:00.152) 0:00:12.355 *****
2026-02-14 03:23:25.053050 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:23:25.053054 | orchestrator |
2026-02-14 03:23:25.053058 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-02-14 03:23:25.053062 | orchestrator | Saturday 14 February 2026 03:23:21 +0000 (0:00:00.144) 0:00:12.490 *****
2026-02-14 03:23:25.053066 | orchestrator | ok: [testbed-node-3] => {
2026-02-14 03:23:25.053070 | orchestrator |     "ceph_osd_devices": {
2026-02-14 03:23:25.053074 | orchestrator |         "sdb": {
2026-02-14 03:23:25.053078 | orchestrator |             "osd_lvm_uuid": "d74a1ea4-c27e-5375-be56-9d9a8e069fa6"
2026-02-14 03:23:25.053082 | orchestrator |         },
2026-02-14 03:23:25.053086 | orchestrator |         "sdc": {
2026-02-14 03:23:25.053090 | orchestrator |             "osd_lvm_uuid": "86d1df08-738c-52e0-accb-8c0a21213af6"
2026-02-14 03:23:25.053094 | orchestrator |         }
2026-02-14 03:23:25.053098 | orchestrator |     }
2026-02-14 03:23:25.053102 | orchestrator | }
2026-02-14 03:23:25.053106 | orchestrator |
2026-02-14 03:23:25.053110 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-02-14 03:23:25.053114 | orchestrator | Saturday 14 February 2026 03:23:21 +0000 (0:00:00.144) 0:00:12.635 *****
2026-02-14 03:23:25.053118 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:23:25.053122 | orchestrator |
2026-02-14 03:23:25.053126 | orchestrator | TASK [Print DB devices] ********************************************************
2026-02-14 03:23:25.053130 | orchestrator | Saturday 14 February 2026 03:23:21 +0000 (0:00:00.141) 0:00:12.776 *****
2026-02-14 03:23:25.053134 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:23:25.053199 | orchestrator |
2026-02-14 03:23:25.053205 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-02-14 03:23:25.053210 | orchestrator | Saturday 14 February 2026 03:23:22 +0000 (0:00:00.140) 0:00:12.916 *****
2026-02-14 03:23:25.053215 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:23:25.053219 | orchestrator |
2026-02-14 03:23:25.053224 | orchestrator | TASK [Print configuration data] ************************************************
2026-02-14 03:23:25.053228 | orchestrator | Saturday 14 February 2026 03:23:22 +0000 (0:00:00.143) 0:00:13.060 *****
2026-02-14 03:23:25.053233 | orchestrator | changed: [testbed-node-3] => {
2026-02-14 03:23:25.053238 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-02-14 03:23:25.053242 | orchestrator |         "ceph_osd_devices": {
2026-02-14 03:23:25.053247 | orchestrator |             "sdb": {
2026-02-14 03:23:25.053251 | orchestrator |                 "osd_lvm_uuid": "d74a1ea4-c27e-5375-be56-9d9a8e069fa6"
2026-02-14 03:23:25.053256 | orchestrator |             },
2026-02-14 03:23:25.053261 | orchestrator |             "sdc": {
2026-02-14 03:23:25.053266 | orchestrator |                 "osd_lvm_uuid": "86d1df08-738c-52e0-accb-8c0a21213af6"
2026-02-14 03:23:25.053270 | orchestrator |             }
2026-02-14 03:23:25.053275 | orchestrator |         },
2026-02-14 03:23:25.053280 | orchestrator |         "lvm_volumes": [
2026-02-14 03:23:25.053285 | orchestrator |             {
2026-02-14 03:23:25.053290 | orchestrator |                 "data": "osd-block-d74a1ea4-c27e-5375-be56-9d9a8e069fa6",
2026-02-14 03:23:25.053295 | orchestrator |                 "data_vg": "ceph-d74a1ea4-c27e-5375-be56-9d9a8e069fa6"
2026-02-14 03:23:25.053299 | orchestrator |             },
2026-02-14 03:23:25.053304 | orchestrator |             {
2026-02-14 03:23:25.053308 | orchestrator |                 "data": "osd-block-86d1df08-738c-52e0-accb-8c0a21213af6",
2026-02-14 03:23:25.053317 | orchestrator |                 "data_vg": "ceph-86d1df08-738c-52e0-accb-8c0a21213af6"
2026-02-14 03:23:25.053322 | orchestrator |             }
2026-02-14 03:23:25.053326 | orchestrator |         ]
2026-02-14 03:23:25.053331 | orchestrator |     }
2026-02-14 03:23:25.053335 | orchestrator | }
2026-02-14 03:23:25.053340 | orchestrator |
2026-02-14 03:23:25.053344 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-02-14 03:23:25.053349 | orchestrator | Saturday 14 February 2026 03:23:22 +0000 (0:00:00.419) 0:00:13.479 *****
2026-02-14 03:23:25.053354 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-14 03:23:25.053358 | orchestrator |
2026-02-14 03:23:25.053363 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-02-14 03:23:25.053367 | orchestrator |
2026-02-14 03:23:25.053372 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-14 03:23:25.053377 | orchestrator | Saturday 14 February 2026 03:23:24 +0000 (0:00:01.841) 0:00:15.321 *****
2026-02-14 03:23:25.053381 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-02-14 03:23:25.053386 | orchestrator |
2026-02-14 03:23:25.053390 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-14 03:23:25.053395 | orchestrator | Saturday 14 February 2026 03:23:24 +0000 (0:00:00.258) 0:00:15.580 *****
2026-02-14 03:23:25.053400 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:23:25.053404 | orchestrator |
2026-02-14 03:23:25.053412 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:23:34.353318 | orchestrator | Saturday 14 February 2026 03:23:25 +0000 (0:00:00.252) 0:00:15.832 *****
2026-02-14 03:23:34.353437 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-02-14 03:23:34.353453 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-02-14 03:23:34.353465 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-02-14 03:23:34.353493 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-02-14 03:23:34.353505 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-02-14 03:23:34.353516 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-02-14 03:23:34.353527 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-02-14 03:23:34.353538 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-02-14 03:23:34.353549 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-02-14 03:23:34.353560 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-02-14 03:23:34.353571 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-02-14 03:23:34.353581 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-02-14 03:23:34.353592 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-02-14 03:23:34.353603 | orchestrator |
2026-02-14 03:23:34.353615 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:23:34.353626 | orchestrator | Saturday 14 February 2026 03:23:25 +0000 (0:00:00.391) 0:00:16.224 *****
2026-02-14 03:23:34.353637 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:23:34.353649 | orchestrator |
2026-02-14 03:23:34.353660 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:23:34.353671 | orchestrator | Saturday 14 February 2026 03:23:25 +0000 (0:00:00.216) 0:00:16.441 *****
2026-02-14 03:23:34.353682 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:23:34.353692 | orchestrator |
2026-02-14 03:23:34.353703 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:23:34.353714 | orchestrator | Saturday 14 February 2026 03:23:25 +0000 (0:00:00.214) 0:00:16.655 *****
2026-02-14 03:23:34.353747 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:23:34.353759 | orchestrator |
2026-02-14 03:23:34.353770 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:23:34.353781 | orchestrator | Saturday 14 February 2026 03:23:26 +0000 (0:00:00.205) 0:00:16.861 *****
2026-02-14 03:23:34.353792 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:23:34.353803 | orchestrator |
2026-02-14 03:23:34.353813 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:23:34.353824 | orchestrator | Saturday 14 February 2026 03:23:26 +0000 (0:00:00.612) 0:00:17.473 *****
2026-02-14 03:23:34.353835 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:23:34.353846 | orchestrator |
2026-02-14 03:23:34.353859 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:23:34.353871 | orchestrator | Saturday 14 February 2026 03:23:26 +0000 (0:00:00.204) 0:00:17.678 *****
2026-02-14 03:23:34.353883 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:23:34.353896 | orchestrator |
2026-02-14 03:23:34.353908 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:23:34.353919 | orchestrator | Saturday 14 February 2026 03:23:27 +0000 (0:00:00.229) 0:00:17.907 *****
2026-02-14 03:23:34.353931 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:23:34.353943 | orchestrator |
2026-02-14 03:23:34.353955 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:23:34.353966 | orchestrator | Saturday 14 February 2026 03:23:27 +0000 (0:00:00.225) 0:00:18.132 *****
2026-02-14 03:23:34.353978 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:23:34.353991 | orchestrator |
2026-02-14 03:23:34.354002 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:23:34.354074 | orchestrator | Saturday 14 February 2026 03:23:27 +0000 (0:00:00.225) 0:00:18.358 *****
2026-02-14 03:23:34.354088 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889)
2026-02-14 03:23:34.354102 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889)
2026-02-14 03:23:34.354114 | orchestrator |
2026-02-14 03:23:34.354127 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:23:34.354139 | orchestrator | Saturday 14 February 2026 03:23:28 +0000 (0:00:00.471) 0:00:18.830 *****
2026-02-14 03:23:34.354260 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f8b6a063-90c4-466f-950a-7ec8689e5fcc)
2026-02-14 03:23:34.354279 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f8b6a063-90c4-466f-950a-7ec8689e5fcc)
2026-02-14 03:23:34.354293 | orchestrator |
2026-02-14 03:23:34.354304 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:23:34.354315 | orchestrator | Saturday 14 February 2026 03:23:28 +0000 (0:00:00.427) 0:00:19.257 *****
2026-02-14 03:23:34.354326 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f960435b-b83d-47c8-ac31-653544f80bd0)
2026-02-14 03:23:34.354337 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f960435b-b83d-47c8-ac31-653544f80bd0)
2026-02-14 03:23:34.354348 | orchestrator |
2026-02-14 03:23:34.354359 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:23:34.354389 | orchestrator | Saturday 14 February 2026 03:23:28 +0000 (0:00:00.462) 0:00:19.719 *****
2026-02-14 03:23:34.354401 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_600e740f-7698-45cf-9f18-28df3084435e)
2026-02-14 03:23:34.354412 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_600e740f-7698-45cf-9f18-28df3084435e)
2026-02-14 03:23:34.354422 | orchestrator |
2026-02-14 03:23:34.354433 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:23:34.354452 | orchestrator | Saturday 14 February 2026 03:23:29 +0000 (0:00:00.674) 0:00:20.394 *****
2026-02-14 03:23:34.354463 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-14 03:23:34.354485 | orchestrator |
2026-02-14 03:23:34.354496 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:23:34.354506 | orchestrator | Saturday 14 February 2026 03:23:30 +0000 (0:00:00.611) 0:00:21.005 *****
2026-02-14 03:23:34.354517 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-02-14 03:23:34.354528 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-02-14 03:23:34.354538 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-02-14 03:23:34.354549 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-02-14 03:23:34.354559 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-02-14 03:23:34.354570 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-02-14 03:23:34.354581 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-02-14 03:23:34.354591 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-02-14 03:23:34.354602 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-02-14 03:23:34.354613 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-02-14 03:23:34.354624 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-02-14 03:23:34.354635 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-02-14 03:23:34.354645 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-02-14 03:23:34.354656 | orchestrator |
2026-02-14 03:23:34.354667 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:23:34.354678 | orchestrator | Saturday 14 February 2026 03:23:31 +0000 (0:00:00.848) 0:00:21.854 *****
2026-02-14 03:23:34.354689 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:23:34.354700 | orchestrator |
2026-02-14 03:23:34.354710 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:23:34.354721 | orchestrator | Saturday 14 February 2026 03:23:31 +0000 (0:00:00.208) 0:00:22.062 *****
2026-02-14 03:23:34.354732 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:23:34.354743 | orchestrator |
2026-02-14 03:23:34.354753 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:23:34.354764 | orchestrator | Saturday 14 February 2026 03:23:31 +0000 (0:00:00.206) 0:00:22.269 *****
2026-02-14 03:23:34.354775 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:23:34.354786 | orchestrator |
2026-02-14 03:23:34.354796 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:23:34.354807 | orchestrator | Saturday 14 February 2026 03:23:31 +0000 (0:00:00.203) 0:00:22.472 *****
2026-02-14 03:23:34.354818 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:23:34.354828 | orchestrator |
2026-02-14 03:23:34.354839 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:23:34.354850 | orchestrator | Saturday 14 February 2026 03:23:31 +0000 (0:00:00.225) 0:00:22.698 *****
2026-02-14 03:23:34.354860 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:23:34.354871 | orchestrator |
2026-02-14 03:23:34.354882 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:23:34.354893 | orchestrator | Saturday 14 February 2026 03:23:32 +0000 (0:00:00.246) 0:00:22.945 *****
2026-02-14 03:23:34.354903 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:23:34.354914 | orchestrator |
2026-02-14 03:23:34.354925 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:23:34.354936 | orchestrator | Saturday 14 February 2026 03:23:32 +0000 (0:00:00.207) 0:00:23.152 *****
2026-02-14 03:23:34.354947 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:23:34.354964 | orchestrator |
2026-02-14 03:23:34.354975 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:23:34.354986 | orchestrator | Saturday 14 February 2026 03:23:32 +0000 (0:00:00.226) 0:00:23.378 *****
2026-02-14 03:23:34.354996 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:23:34.355007 | orchestrator |
2026-02-14 03:23:34.355018 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:23:34.355029 | orchestrator | Saturday 14 February 2026 03:23:32 +0000 (0:00:00.205) 0:00:23.583 *****
2026-02-14 03:23:34.355039 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-02-14 03:23:34.355051 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-02-14 03:23:34.355062 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-02-14 03:23:34.355073 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-02-14 03:23:34.355084 | orchestrator |
2026-02-14 03:23:34.355095 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:23:34.355106 | orchestrator | Saturday 14 February 2026 03:23:33 +0000 (0:00:00.880) 0:00:24.464 *****
2026-02-14 03:23:34.355117 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:23:40.564925 | orchestrator |
2026-02-14 03:23:40.565087 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:23:40.565120 | orchestrator | Saturday 14 February 2026 03:23:34 +0000 (0:00:00.670) 0:00:25.134 *****
2026-02-14 03:23:40.565153 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:23:40.565207 | orchestrator |
2026-02-14 03:23:40.565220 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:23:40.565232 | orchestrator | Saturday 14 February 2026 03:23:34 +0000 (0:00:00.221) 0:00:25.356 *****
2026-02-14 03:23:40.565261 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:23:40.565273 | orchestrator |
2026-02-14 03:23:40.565284 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:23:40.565296 | orchestrator | Saturday 14 February 2026 03:23:34 +0000 (0:00:00.216) 0:00:25.572 *****
2026-02-14 03:23:40.565307 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:23:40.565318 | orchestrator |
2026-02-14 03:23:40.565329 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-02-14 03:23:40.565340 | orchestrator | Saturday 14 February 2026 03:23:35 +0000 (0:00:00.224) 0:00:25.797 *****
2026-02-14 03:23:40.565351 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-02-14 03:23:40.565362 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-02-14 03:23:40.565373 | orchestrator |
2026-02-14 03:23:40.565384 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-02-14 03:23:40.565395 | orchestrator | Saturday 14 February 2026 03:23:35 +0000 (0:00:00.195) 0:00:25.992 *****
2026-02-14 03:23:40.565406 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:23:40.565417 | orchestrator |
2026-02-14 03:23:40.565429 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-02-14 03:23:40.565439 | orchestrator | Saturday 14 February 2026 03:23:35 +0000 (0:00:00.162) 0:00:26.155 *****
2026-02-14 03:23:40.565450 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:23:40.565463 | orchestrator |
2026-02-14 03:23:40.565476 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-02-14 03:23:40.565488 | orchestrator | Saturday 14 February 2026 03:23:35 +0000 (0:00:00.166) 0:00:26.322 *****
2026-02-14 03:23:40.565500 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:23:40.565513 | orchestrator |
2026-02-14 03:23:40.565525 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-02-14 03:23:40.565537 | orchestrator | Saturday 14 February 2026 03:23:35 +0000 (0:00:00.148) 0:00:26.471 *****
2026-02-14 03:23:40.565550 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:23:40.565563 | orchestrator |
2026-02-14 03:23:40.565575 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-02-14 03:23:40.565588 | orchestrator | Saturday 14 February 2026 03:23:35 +0000 (0:00:00.153) 0:00:26.624 *****
2026-02-14 03:23:40.565623 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7b577363-2bac-543e-944e-5354861b1af5'}})
2026-02-14 03:23:40.565636 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'df737486-1b51-5b4a-92b8-76d7a8957091'}})
2026-02-14 03:23:40.565647 | orchestrator |
2026-02-14 03:23:40.565658 | orchestrator | TASK
[Generate lvm_volumes structure (block + db)] ***************************** 2026-02-14 03:23:40.565669 | orchestrator | Saturday 14 February 2026 03:23:36 +0000 (0:00:00.178) 0:00:26.802 ***** 2026-02-14 03:23:40.565681 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7b577363-2bac-543e-944e-5354861b1af5'}})  2026-02-14 03:23:40.565693 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'df737486-1b51-5b4a-92b8-76d7a8957091'}})  2026-02-14 03:23:40.565704 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:23:40.565715 | orchestrator | 2026-02-14 03:23:40.565726 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-14 03:23:40.565737 | orchestrator | Saturday 14 February 2026 03:23:36 +0000 (0:00:00.158) 0:00:26.961 ***** 2026-02-14 03:23:40.565748 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7b577363-2bac-543e-944e-5354861b1af5'}})  2026-02-14 03:23:40.565759 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'df737486-1b51-5b4a-92b8-76d7a8957091'}})  2026-02-14 03:23:40.565770 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:23:40.565781 | orchestrator | 2026-02-14 03:23:40.565792 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-14 03:23:40.565803 | orchestrator | Saturday 14 February 2026 03:23:36 +0000 (0:00:00.378) 0:00:27.340 ***** 2026-02-14 03:23:40.565814 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7b577363-2bac-543e-944e-5354861b1af5'}})  2026-02-14 03:23:40.565825 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'df737486-1b51-5b4a-92b8-76d7a8957091'}})  2026-02-14 03:23:40.565836 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:23:40.565847 | 
orchestrator | 2026-02-14 03:23:40.565858 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-14 03:23:40.565869 | orchestrator | Saturday 14 February 2026 03:23:36 +0000 (0:00:00.162) 0:00:27.502 ***** 2026-02-14 03:23:40.565879 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:23:40.565891 | orchestrator | 2026-02-14 03:23:40.565902 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-14 03:23:40.565912 | orchestrator | Saturday 14 February 2026 03:23:36 +0000 (0:00:00.157) 0:00:27.660 ***** 2026-02-14 03:23:40.565923 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:23:40.565934 | orchestrator | 2026-02-14 03:23:40.565945 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-14 03:23:40.565957 | orchestrator | Saturday 14 February 2026 03:23:37 +0000 (0:00:00.166) 0:00:27.827 ***** 2026-02-14 03:23:40.565992 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:23:40.566012 | orchestrator | 2026-02-14 03:23:40.566105 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-14 03:23:40.566119 | orchestrator | Saturday 14 February 2026 03:23:37 +0000 (0:00:00.165) 0:00:27.992 ***** 2026-02-14 03:23:40.566136 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:23:40.566155 | orchestrator | 2026-02-14 03:23:40.566245 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-14 03:23:40.566266 | orchestrator | Saturday 14 February 2026 03:23:37 +0000 (0:00:00.143) 0:00:28.136 ***** 2026-02-14 03:23:40.566293 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:23:40.566312 | orchestrator | 2026-02-14 03:23:40.566332 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-14 03:23:40.566352 | orchestrator | Saturday 14 February 2026 03:23:37 +0000 
(0:00:00.143) 0:00:28.280 ***** 2026-02-14 03:23:40.566386 | orchestrator | ok: [testbed-node-4] => { 2026-02-14 03:23:40.566404 | orchestrator |  "ceph_osd_devices": { 2026-02-14 03:23:40.566422 | orchestrator |  "sdb": { 2026-02-14 03:23:40.566434 | orchestrator |  "osd_lvm_uuid": "7b577363-2bac-543e-944e-5354861b1af5" 2026-02-14 03:23:40.566446 | orchestrator |  }, 2026-02-14 03:23:40.566457 | orchestrator |  "sdc": { 2026-02-14 03:23:40.566468 | orchestrator |  "osd_lvm_uuid": "df737486-1b51-5b4a-92b8-76d7a8957091" 2026-02-14 03:23:40.566479 | orchestrator |  } 2026-02-14 03:23:40.566489 | orchestrator |  } 2026-02-14 03:23:40.566501 | orchestrator | } 2026-02-14 03:23:40.566512 | orchestrator | 2026-02-14 03:23:40.566523 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-14 03:23:40.566534 | orchestrator | Saturday 14 February 2026 03:23:37 +0000 (0:00:00.163) 0:00:28.443 ***** 2026-02-14 03:23:40.566545 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:23:40.566556 | orchestrator | 2026-02-14 03:23:40.566567 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-14 03:23:40.566577 | orchestrator | Saturday 14 February 2026 03:23:37 +0000 (0:00:00.138) 0:00:28.582 ***** 2026-02-14 03:23:40.566588 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:23:40.566599 | orchestrator | 2026-02-14 03:23:40.566610 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-14 03:23:40.566621 | orchestrator | Saturday 14 February 2026 03:23:37 +0000 (0:00:00.144) 0:00:28.727 ***** 2026-02-14 03:23:40.566632 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:23:40.566642 | orchestrator | 2026-02-14 03:23:40.566653 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-14 03:23:40.566664 | orchestrator | Saturday 14 February 2026 03:23:38 +0000 
(0:00:00.143) 0:00:28.870 ***** 2026-02-14 03:23:40.566675 | orchestrator | changed: [testbed-node-4] => { 2026-02-14 03:23:40.566686 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-14 03:23:40.566697 | orchestrator |  "ceph_osd_devices": { 2026-02-14 03:23:40.566708 | orchestrator |  "sdb": { 2026-02-14 03:23:40.566719 | orchestrator |  "osd_lvm_uuid": "7b577363-2bac-543e-944e-5354861b1af5" 2026-02-14 03:23:40.566729 | orchestrator |  }, 2026-02-14 03:23:40.566740 | orchestrator |  "sdc": { 2026-02-14 03:23:40.566751 | orchestrator |  "osd_lvm_uuid": "df737486-1b51-5b4a-92b8-76d7a8957091" 2026-02-14 03:23:40.566762 | orchestrator |  } 2026-02-14 03:23:40.566773 | orchestrator |  }, 2026-02-14 03:23:40.566784 | orchestrator |  "lvm_volumes": [ 2026-02-14 03:23:40.566795 | orchestrator |  { 2026-02-14 03:23:40.566806 | orchestrator |  "data": "osd-block-7b577363-2bac-543e-944e-5354861b1af5", 2026-02-14 03:23:40.566817 | orchestrator |  "data_vg": "ceph-7b577363-2bac-543e-944e-5354861b1af5" 2026-02-14 03:23:40.566828 | orchestrator |  }, 2026-02-14 03:23:40.566838 | orchestrator |  { 2026-02-14 03:23:40.566849 | orchestrator |  "data": "osd-block-df737486-1b51-5b4a-92b8-76d7a8957091", 2026-02-14 03:23:40.566860 | orchestrator |  "data_vg": "ceph-df737486-1b51-5b4a-92b8-76d7a8957091" 2026-02-14 03:23:40.566871 | orchestrator |  } 2026-02-14 03:23:40.566882 | orchestrator |  ] 2026-02-14 03:23:40.566893 | orchestrator |  } 2026-02-14 03:23:40.566904 | orchestrator | } 2026-02-14 03:23:40.566915 | orchestrator | 2026-02-14 03:23:40.566926 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-14 03:23:40.566937 | orchestrator | Saturday 14 February 2026 03:23:38 +0000 (0:00:00.412) 0:00:29.283 ***** 2026-02-14 03:23:40.566948 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-14 03:23:40.566959 | orchestrator | 2026-02-14 03:23:40.566970 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-02-14 03:23:40.566980 | orchestrator | 2026-02-14 03:23:40.566991 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-14 03:23:40.567002 | orchestrator | Saturday 14 February 2026 03:23:39 +0000 (0:00:01.164) 0:00:30.448 ***** 2026-02-14 03:23:40.567021 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-14 03:23:40.567032 | orchestrator | 2026-02-14 03:23:40.567043 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-14 03:23:40.567054 | orchestrator | Saturday 14 February 2026 03:23:39 +0000 (0:00:00.269) 0:00:30.717 ***** 2026-02-14 03:23:40.567064 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:23:40.567075 | orchestrator | 2026-02-14 03:23:40.567086 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-14 03:23:40.567097 | orchestrator | Saturday 14 February 2026 03:23:40 +0000 (0:00:00.247) 0:00:30.965 ***** 2026-02-14 03:23:40.567108 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-02-14 03:23:40.567119 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-02-14 03:23:40.567130 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-02-14 03:23:40.567140 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-02-14 03:23:40.567151 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-02-14 03:23:40.567212 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-02-14 03:23:49.310755 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-02-14 03:23:49.310870 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-02-14 03:23:49.310888 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-02-14 03:23:49.310901 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-02-14 03:23:49.310930 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-02-14 03:23:49.310941 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-02-14 03:23:49.310953 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-02-14 03:23:49.310964 | orchestrator | 2026-02-14 03:23:49.310976 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-14 03:23:49.310988 | orchestrator | Saturday 14 February 2026 03:23:40 +0000 (0:00:00.378) 0:00:31.344 ***** 2026-02-14 03:23:49.310999 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:23:49.311012 | orchestrator | 2026-02-14 03:23:49.311023 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-14 03:23:49.311034 | orchestrator | Saturday 14 February 2026 03:23:40 +0000 (0:00:00.204) 0:00:31.548 ***** 2026-02-14 03:23:49.311045 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:23:49.311056 | orchestrator | 2026-02-14 03:23:49.311067 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-14 03:23:49.311078 | orchestrator | Saturday 14 February 2026 03:23:40 +0000 (0:00:00.200) 0:00:31.749 ***** 2026-02-14 03:23:49.311089 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:23:49.311100 | orchestrator | 2026-02-14 03:23:49.311112 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-14 03:23:49.311123 | 
orchestrator | Saturday 14 February 2026 03:23:41 +0000 (0:00:00.199) 0:00:31.949 ***** 2026-02-14 03:23:49.311133 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:23:49.311144 | orchestrator | 2026-02-14 03:23:49.311155 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-14 03:23:49.311167 | orchestrator | Saturday 14 February 2026 03:23:41 +0000 (0:00:00.620) 0:00:32.570 ***** 2026-02-14 03:23:49.311215 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:23:49.311233 | orchestrator | 2026-02-14 03:23:49.311245 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-14 03:23:49.311256 | orchestrator | Saturday 14 February 2026 03:23:41 +0000 (0:00:00.216) 0:00:32.787 ***** 2026-02-14 03:23:49.311287 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:23:49.311301 | orchestrator | 2026-02-14 03:23:49.311314 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-14 03:23:49.311326 | orchestrator | Saturday 14 February 2026 03:23:42 +0000 (0:00:00.208) 0:00:32.995 ***** 2026-02-14 03:23:49.311339 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:23:49.311351 | orchestrator | 2026-02-14 03:23:49.311363 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-14 03:23:49.311376 | orchestrator | Saturday 14 February 2026 03:23:42 +0000 (0:00:00.219) 0:00:33.215 ***** 2026-02-14 03:23:49.311388 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:23:49.311400 | orchestrator | 2026-02-14 03:23:49.311413 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-14 03:23:49.311425 | orchestrator | Saturday 14 February 2026 03:23:42 +0000 (0:00:00.210) 0:00:33.425 ***** 2026-02-14 03:23:49.311437 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397) 2026-02-14 03:23:49.311450 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397) 2026-02-14 03:23:49.311463 | orchestrator | 2026-02-14 03:23:49.311475 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-14 03:23:49.311488 | orchestrator | Saturday 14 February 2026 03:23:43 +0000 (0:00:00.467) 0:00:33.893 ***** 2026-02-14 03:23:49.311501 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_89ffb490-ef56-465e-9c2a-8772cc279d48) 2026-02-14 03:23:49.311513 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_89ffb490-ef56-465e-9c2a-8772cc279d48) 2026-02-14 03:23:49.311525 | orchestrator | 2026-02-14 03:23:49.311536 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-14 03:23:49.311547 | orchestrator | Saturday 14 February 2026 03:23:43 +0000 (0:00:00.444) 0:00:34.337 ***** 2026-02-14 03:23:49.311558 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_54e6ca54-a1fe-4396-8891-5cf52f763d40) 2026-02-14 03:23:49.311569 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_54e6ca54-a1fe-4396-8891-5cf52f763d40) 2026-02-14 03:23:49.311580 | orchestrator | 2026-02-14 03:23:49.311591 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-14 03:23:49.311601 | orchestrator | Saturday 14 February 2026 03:23:43 +0000 (0:00:00.438) 0:00:34.776 ***** 2026-02-14 03:23:49.311613 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_43152e32-b25a-4e6e-b6b7-c3272099ce67) 2026-02-14 03:23:49.311624 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_43152e32-b25a-4e6e-b6b7-c3272099ce67) 2026-02-14 03:23:49.311634 | orchestrator | 2026-02-14 03:23:49.311645 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-02-14 03:23:49.311656 | orchestrator | Saturday 14 February 2026 03:23:44 +0000 (0:00:00.438) 0:00:35.214 ***** 2026-02-14 03:23:49.311667 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-14 03:23:49.311678 | orchestrator | 2026-02-14 03:23:49.311689 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-14 03:23:49.311718 | orchestrator | Saturday 14 February 2026 03:23:44 +0000 (0:00:00.356) 0:00:35.571 ***** 2026-02-14 03:23:49.311730 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-02-14 03:23:49.311741 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-02-14 03:23:49.311752 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-02-14 03:23:49.311768 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-02-14 03:23:49.311780 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-02-14 03:23:49.311791 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-02-14 03:23:49.311809 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-02-14 03:23:49.311820 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-02-14 03:23:49.311831 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-02-14 03:23:49.311842 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-02-14 03:23:49.311852 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2026-02-14 03:23:49.311863 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-02-14 03:23:49.311874 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-02-14 03:23:49.311885 | orchestrator | 2026-02-14 03:23:49.311896 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-14 03:23:49.311907 | orchestrator | Saturday 14 February 2026 03:23:45 +0000 (0:00:00.593) 0:00:36.165 ***** 2026-02-14 03:23:49.311918 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:23:49.311929 | orchestrator | 2026-02-14 03:23:49.311939 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-14 03:23:49.311950 | orchestrator | Saturday 14 February 2026 03:23:45 +0000 (0:00:00.221) 0:00:36.387 ***** 2026-02-14 03:23:49.311961 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:23:49.311972 | orchestrator | 2026-02-14 03:23:49.311983 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-14 03:23:49.311994 | orchestrator | Saturday 14 February 2026 03:23:45 +0000 (0:00:00.216) 0:00:36.603 ***** 2026-02-14 03:23:49.312005 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:23:49.312016 | orchestrator | 2026-02-14 03:23:49.312027 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-14 03:23:49.312038 | orchestrator | Saturday 14 February 2026 03:23:46 +0000 (0:00:00.216) 0:00:36.819 ***** 2026-02-14 03:23:49.312049 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:23:49.312060 | orchestrator | 2026-02-14 03:23:49.312071 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-14 03:23:49.312082 | orchestrator | Saturday 14 February 2026 03:23:46 +0000 (0:00:00.220) 0:00:37.040 ***** 2026-02-14 03:23:49.312093 
| orchestrator | skipping: [testbed-node-5] 2026-02-14 03:23:49.312104 | orchestrator | 2026-02-14 03:23:49.312115 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-14 03:23:49.312125 | orchestrator | Saturday 14 February 2026 03:23:46 +0000 (0:00:00.197) 0:00:37.237 ***** 2026-02-14 03:23:49.312136 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:23:49.312147 | orchestrator | 2026-02-14 03:23:49.312158 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-14 03:23:49.312169 | orchestrator | Saturday 14 February 2026 03:23:46 +0000 (0:00:00.218) 0:00:37.455 ***** 2026-02-14 03:23:49.312200 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:23:49.312211 | orchestrator | 2026-02-14 03:23:49.312223 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-14 03:23:49.312234 | orchestrator | Saturday 14 February 2026 03:23:46 +0000 (0:00:00.230) 0:00:37.686 ***** 2026-02-14 03:23:49.312245 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:23:49.312256 | orchestrator | 2026-02-14 03:23:49.312267 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-14 03:23:49.312278 | orchestrator | Saturday 14 February 2026 03:23:47 +0000 (0:00:00.214) 0:00:37.900 ***** 2026-02-14 03:23:49.312289 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-02-14 03:23:49.312300 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-02-14 03:23:49.312312 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-02-14 03:23:49.312323 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-02-14 03:23:49.312334 | orchestrator | 2026-02-14 03:23:49.312351 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-14 03:23:49.312362 | orchestrator | Saturday 14 February 2026 03:23:47 +0000 (0:00:00.858) 
0:00:38.759 ***** 2026-02-14 03:23:49.312373 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:23:49.312384 | orchestrator | 2026-02-14 03:23:49.312395 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-14 03:23:49.312406 | orchestrator | Saturday 14 February 2026 03:23:48 +0000 (0:00:00.201) 0:00:38.961 ***** 2026-02-14 03:23:49.312417 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:23:49.312428 | orchestrator | 2026-02-14 03:23:49.312439 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-14 03:23:49.312450 | orchestrator | Saturday 14 February 2026 03:23:48 +0000 (0:00:00.206) 0:00:39.167 ***** 2026-02-14 03:23:49.312461 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:23:49.312472 | orchestrator | 2026-02-14 03:23:49.312483 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-14 03:23:49.312494 | orchestrator | Saturday 14 February 2026 03:23:49 +0000 (0:00:00.685) 0:00:39.853 ***** 2026-02-14 03:23:49.312505 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:23:49.312516 | orchestrator | 2026-02-14 03:23:49.312533 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-14 03:23:53.536748 | orchestrator | Saturday 14 February 2026 03:23:49 +0000 (0:00:00.238) 0:00:40.092 ***** 2026-02-14 03:23:53.536855 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-02-14 03:23:53.536870 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-02-14 03:23:53.536882 | orchestrator | 2026-02-14 03:23:53.536895 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-02-14 03:23:53.536923 | orchestrator | Saturday 14 February 2026 03:23:49 +0000 (0:00:00.176) 0:00:40.268 ***** 2026-02-14 03:23:53.536935 | orchestrator | skipping: 
[testbed-node-5] 2026-02-14 03:23:53.536947 | orchestrator | 2026-02-14 03:23:53.536958 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-14 03:23:53.536969 | orchestrator | Saturday 14 February 2026 03:23:49 +0000 (0:00:00.155) 0:00:40.424 ***** 2026-02-14 03:23:53.536980 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:23:53.536991 | orchestrator | 2026-02-14 03:23:53.537002 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-14 03:23:53.537013 | orchestrator | Saturday 14 February 2026 03:23:49 +0000 (0:00:00.145) 0:00:40.569 ***** 2026-02-14 03:23:53.537023 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:23:53.537034 | orchestrator | 2026-02-14 03:23:53.537045 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-14 03:23:53.537056 | orchestrator | Saturday 14 February 2026 03:23:49 +0000 (0:00:00.147) 0:00:40.716 ***** 2026-02-14 03:23:53.537067 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:23:53.537079 | orchestrator | 2026-02-14 03:23:53.537090 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-14 03:23:53.537101 | orchestrator | Saturday 14 February 2026 03:23:50 +0000 (0:00:00.140) 0:00:40.857 ***** 2026-02-14 03:23:53.537112 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1745485d-ab31-507e-930d-8d3ce82a0691'}}) 2026-02-14 03:23:53.537124 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f7da5590-35e5-5703-96c8-37fe127c27f7'}}) 2026-02-14 03:23:53.537134 | orchestrator | 2026-02-14 03:23:53.537145 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-02-14 03:23:53.537156 | orchestrator | Saturday 14 February 2026 03:23:50 +0000 (0:00:00.173) 0:00:41.031 ***** 2026-02-14 03:23:53.537168 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1745485d-ab31-507e-930d-8d3ce82a0691'}})
2026-02-14 03:23:53.537180 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f7da5590-35e5-5703-96c8-37fe127c27f7'}})
2026-02-14 03:23:53.537229 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:23:53.537262 | orchestrator |
2026-02-14 03:23:53.537274 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-02-14 03:23:53.537285 | orchestrator | Saturday 14 February 2026 03:23:50 +0000 (0:00:00.153) 0:00:41.184 *****
2026-02-14 03:23:53.537295 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1745485d-ab31-507e-930d-8d3ce82a0691'}})
2026-02-14 03:23:53.537306 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f7da5590-35e5-5703-96c8-37fe127c27f7'}})
2026-02-14 03:23:53.537317 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:23:53.537328 | orchestrator |
2026-02-14 03:23:53.537339 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-02-14 03:23:53.537350 | orchestrator | Saturday 14 February 2026 03:23:50 +0000 (0:00:00.174) 0:00:41.358 *****
2026-02-14 03:23:53.537361 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1745485d-ab31-507e-930d-8d3ce82a0691'}})
2026-02-14 03:23:53.537372 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f7da5590-35e5-5703-96c8-37fe127c27f7'}})
2026-02-14 03:23:53.537383 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:23:53.537394 | orchestrator |
2026-02-14 03:23:53.537405 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-02-14 03:23:53.537415 | orchestrator | Saturday 14 February 2026 03:23:50 +0000 (0:00:00.159) 0:00:41.517 *****
2026-02-14 03:23:53.537427 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:23:53.537437 | orchestrator |
2026-02-14 03:23:53.537448 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-02-14 03:23:53.537459 | orchestrator | Saturday 14 February 2026 03:23:50 +0000 (0:00:00.156) 0:00:41.674 *****
2026-02-14 03:23:53.537470 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:23:53.537481 | orchestrator |
2026-02-14 03:23:53.537492 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-02-14 03:23:53.537503 | orchestrator | Saturday 14 February 2026 03:23:51 +0000 (0:00:00.352) 0:00:42.026 *****
2026-02-14 03:23:53.537514 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:23:53.537525 | orchestrator |
2026-02-14 03:23:53.537538 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-02-14 03:23:53.537557 | orchestrator | Saturday 14 February 2026 03:23:51 +0000 (0:00:00.150) 0:00:42.177 *****
2026-02-14 03:23:53.537575 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:23:53.537594 | orchestrator |
2026-02-14 03:23:53.537611 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-02-14 03:23:53.537630 | orchestrator | Saturday 14 February 2026 03:23:51 +0000 (0:00:00.137) 0:00:42.314 *****
2026-02-14 03:23:53.537647 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:23:53.537665 | orchestrator |
2026-02-14 03:23:53.537684 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-02-14 03:23:53.537703 | orchestrator | Saturday 14 February 2026 03:23:51 +0000 (0:00:00.144) 0:00:42.459 *****
2026-02-14 03:23:53.537720 | orchestrator | ok: [testbed-node-5] => {
2026-02-14 03:23:53.537739 | orchestrator |     "ceph_osd_devices": {
2026-02-14 03:23:53.537758 | orchestrator |         "sdb": {
2026-02-14 03:23:53.537799 | orchestrator |             "osd_lvm_uuid": "1745485d-ab31-507e-930d-8d3ce82a0691"
2026-02-14 03:23:53.537812 | orchestrator |         },
2026-02-14 03:23:53.537823 | orchestrator |         "sdc": {
2026-02-14 03:23:53.537834 | orchestrator |             "osd_lvm_uuid": "f7da5590-35e5-5703-96c8-37fe127c27f7"
2026-02-14 03:23:53.537845 | orchestrator |         }
2026-02-14 03:23:53.537856 | orchestrator |     }
2026-02-14 03:23:53.537867 | orchestrator | }
2026-02-14 03:23:53.537878 | orchestrator |
2026-02-14 03:23:53.537897 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-02-14 03:23:53.537908 | orchestrator | Saturday 14 February 2026 03:23:51 +0000 (0:00:00.146) 0:00:42.605 *****
2026-02-14 03:23:53.537919 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:23:53.537939 | orchestrator |
2026-02-14 03:23:53.537950 | orchestrator | TASK [Print DB devices] ********************************************************
2026-02-14 03:23:53.537961 | orchestrator | Saturday 14 February 2026 03:23:51 +0000 (0:00:00.140) 0:00:42.746 *****
2026-02-14 03:23:53.537972 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:23:53.537982 | orchestrator |
2026-02-14 03:23:53.537993 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-02-14 03:23:53.538004 | orchestrator | Saturday 14 February 2026 03:23:52 +0000 (0:00:00.154) 0:00:42.900 *****
2026-02-14 03:23:53.538080 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:23:53.538093 | orchestrator |
2026-02-14 03:23:53.538104 | orchestrator | TASK [Print configuration data] ************************************************
2026-02-14 03:23:53.538116 | orchestrator | Saturday 14 February 2026 03:23:52 +0000 (0:00:00.142) 0:00:43.043 *****
2026-02-14 03:23:53.538126 | orchestrator | changed: [testbed-node-5] => {
2026-02-14 03:23:53.538137 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-02-14 03:23:53.538148 | orchestrator |         "ceph_osd_devices": {
2026-02-14 03:23:53.538159 | orchestrator |             "sdb": {
2026-02-14 03:23:53.538170 | orchestrator |                 "osd_lvm_uuid": "1745485d-ab31-507e-930d-8d3ce82a0691"
2026-02-14 03:23:53.538181 | orchestrator |             },
2026-02-14 03:23:53.538234 | orchestrator |             "sdc": {
2026-02-14 03:23:53.538246 | orchestrator |                 "osd_lvm_uuid": "f7da5590-35e5-5703-96c8-37fe127c27f7"
2026-02-14 03:23:53.538257 | orchestrator |             }
2026-02-14 03:23:53.538268 | orchestrator |         },
2026-02-14 03:23:53.538279 | orchestrator |         "lvm_volumes": [
2026-02-14 03:23:53.538290 | orchestrator |             {
2026-02-14 03:23:53.538301 | orchestrator |                 "data": "osd-block-1745485d-ab31-507e-930d-8d3ce82a0691",
2026-02-14 03:23:53.538312 | orchestrator |                 "data_vg": "ceph-1745485d-ab31-507e-930d-8d3ce82a0691"
2026-02-14 03:23:53.538322 | orchestrator |             },
2026-02-14 03:23:53.538333 | orchestrator |             {
2026-02-14 03:23:53.538344 | orchestrator |                 "data": "osd-block-f7da5590-35e5-5703-96c8-37fe127c27f7",
2026-02-14 03:23:53.538355 | orchestrator |                 "data_vg": "ceph-f7da5590-35e5-5703-96c8-37fe127c27f7"
2026-02-14 03:23:53.538367 | orchestrator |             }
2026-02-14 03:23:53.538378 | orchestrator |         ]
2026-02-14 03:23:53.538389 | orchestrator |     }
2026-02-14 03:23:53.538400 | orchestrator | }
2026-02-14 03:23:53.538411 | orchestrator |
2026-02-14 03:23:53.538422 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-02-14 03:23:53.538433 | orchestrator | Saturday 14 February 2026 03:23:52 +0000 (0:00:00.240) 0:00:43.283 *****
2026-02-14 03:23:53.538443 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-02-14 03:23:53.538454 | orchestrator |
2026-02-14 03:23:53.538465 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 03:23:53.538477 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-14 03:23:53.538490 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-14 03:23:53.538501 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-14 03:23:53.538512 | orchestrator |
2026-02-14 03:23:53.538523 | orchestrator |
2026-02-14 03:23:53.538534 | orchestrator |
2026-02-14 03:23:53.538545 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 03:23:53.538556 | orchestrator | Saturday 14 February 2026 03:23:53 +0000 (0:00:01.014) 0:00:44.297 *****
2026-02-14 03:23:53.538567 | orchestrator | ===============================================================================
2026-02-14 03:23:53.538577 | orchestrator | Write configuration file ------------------------------------------------ 4.02s
2026-02-14 03:23:53.538597 | orchestrator | Add known partitions to the list of available block devices ------------- 1.83s
2026-02-14 03:23:53.538608 | orchestrator | Add known links to the list of available block devices ------------------ 1.25s
2026-02-14 03:23:53.538619 | orchestrator | Print configuration data ------------------------------------------------ 1.07s
2026-02-14 03:23:53.538630 | orchestrator | Add known partitions to the list of available block devices ------------- 1.05s
2026-02-14 03:23:53.538640 | orchestrator | Add known links to the list of available block devices ------------------ 0.90s
2026-02-14 03:23:53.538651 | orchestrator | Add known partitions to the list of available block devices ------------- 0.88s
2026-02-14 03:23:53.538662 | orchestrator | Add known partitions to the list of available block devices ------------- 0.86s
2026-02-14 03:23:53.538673 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.80s
2026-02-14 03:23:53.538684 | orchestrator | Get initial list of available block devices ----------------------------- 0.75s
2026-02-14 03:23:53.538695 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.71s
2026-02-14 03:23:53.538706 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s
2026-02-14 03:23:53.538717 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s
2026-02-14 03:23:53.538737 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s
2026-02-14 03:23:53.943574 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s
2026-02-14 03:23:53.943694 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s
2026-02-14 03:23:53.943720 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.66s
2026-02-14 03:23:53.943765 | orchestrator | Set OSD devices config data --------------------------------------------- 0.66s
2026-02-14 03:23:53.943786 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s
2026-02-14 03:23:53.943806 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s
2026-02-14 03:24:16.542950 | orchestrator | 2026-02-14 03:24:16 | INFO  | Task 416fdf61-39b6-4688-9e43-ab6f60fe4478 (sync inventory) is running in background. Output coming soon.
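The "Print configuration data" output above shows how the play derives `lvm_volumes` from `ceph_osd_devices`: each device's `osd_lvm_uuid` names both the logical volume (`osd-block-<uuid>`) and its volume group (`ceph-<uuid>`). A minimal Python sketch of that mapping (illustrative only, not the playbook's actual code), using the UUIDs logged for testbed-node-5:

```python
# Sketch: derive the lvm_volumes list from ceph_osd_devices, mirroring
# the "Compile lvm_volumes" task output seen in the log above.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "1745485d-ab31-507e-930d-8d3ce82a0691"},
    "sdc": {"osd_lvm_uuid": "f7da5590-35e5-5703-96c8-37fe127c27f7"},
}

lvm_volumes = [
    {
        # LV name and VG name are both keyed on the per-device UUID
        "data": f"osd-block-{dev['osd_lvm_uuid']}",
        "data_vg": f"ceph-{dev['osd_lvm_uuid']}",
    }
    for dev in ceph_osd_devices.values()
]

print(lvm_volumes[0]["data_vg"])  # ceph-1745485d-ab31-507e-930d-8d3ce82a0691
```

Because the UUIDs are stable (stored per device in the inventory), the generated structure is idempotent across runs, which is why the follow-up ceph-create-lvm-devices play can recompute it safely.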
2026-02-14 03:24:43.873939 | orchestrator | 2026-02-14 03:24:18 | INFO  | Starting group_vars file reorganization
2026-02-14 03:24:43.874177 | orchestrator | 2026-02-14 03:24:18 | INFO  | Moved 0 file(s) to their respective directories
2026-02-14 03:24:43.874197 | orchestrator | 2026-02-14 03:24:18 | INFO  | Group_vars file reorganization completed
2026-02-14 03:24:43.874209 | orchestrator | 2026-02-14 03:24:20 | INFO  | Starting variable preparation from inventory
2026-02-14 03:24:43.874220 | orchestrator | 2026-02-14 03:24:23 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-02-14 03:24:43.874232 | orchestrator | 2026-02-14 03:24:23 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-02-14 03:24:43.874243 | orchestrator | 2026-02-14 03:24:23 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-02-14 03:24:43.874254 | orchestrator | 2026-02-14 03:24:23 | INFO  | 3 file(s) written, 6 host(s) processed
2026-02-14 03:24:43.874265 | orchestrator | 2026-02-14 03:24:23 | INFO  | Variable preparation completed
2026-02-14 03:24:43.874276 | orchestrator | 2026-02-14 03:24:24 | INFO  | Starting inventory overwrite handling
2026-02-14 03:24:43.874287 | orchestrator | 2026-02-14 03:24:24 | INFO  | Handling group overwrites in 99-overwrite
2026-02-14 03:24:43.874298 | orchestrator | 2026-02-14 03:24:24 | INFO  | Removing group frr:children from 60-generic
2026-02-14 03:24:43.874309 | orchestrator | 2026-02-14 03:24:24 | INFO  | Removing group netbird:children from 50-infrastructure
2026-02-14 03:24:43.874395 | orchestrator | 2026-02-14 03:24:24 | INFO  | Removing group ceph-rgw from 50-ceph
2026-02-14 03:24:43.874440 | orchestrator | 2026-02-14 03:24:24 | INFO  | Removing group ceph-mds from 50-ceph
2026-02-14 03:24:43.874452 | orchestrator | 2026-02-14 03:24:24 | INFO  | Handling group overwrites in 20-roles
2026-02-14 03:24:43.874463 | orchestrator | 2026-02-14 03:24:24 | INFO  | Removing group k3s_node from 50-infrastructure
2026-02-14 03:24:43.874474 | orchestrator | 2026-02-14 03:24:24 | INFO  | Removed 5 group(s) in total
2026-02-14 03:24:43.874485 | orchestrator | 2026-02-14 03:24:24 | INFO  | Inventory overwrite handling completed
2026-02-14 03:24:43.874496 | orchestrator | 2026-02-14 03:24:26 | INFO  | Starting merge of inventory files
2026-02-14 03:24:43.874507 | orchestrator | 2026-02-14 03:24:26 | INFO  | Inventory files merged successfully
2026-02-14 03:24:43.874517 | orchestrator | 2026-02-14 03:24:31 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-02-14 03:24:43.874528 | orchestrator | 2026-02-14 03:24:42 | INFO  | Successfully wrote ClusterShell configuration
2026-02-14 03:24:43.874540 | orchestrator | [master ca422ce] 2026-02-14-03-24
2026-02-14 03:24:43.874552 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-02-14 03:24:46.228363 | orchestrator | 2026-02-14 03:24:46 | INFO  | Task 25f18c90-de73-400b-b018-40ea644cb628 (ceph-create-lvm-devices) was prepared for execution.
2026-02-14 03:24:46.228458 | orchestrator | 2026-02-14 03:24:46 | INFO  | It takes a moment until task 25f18c90-de73-400b-b018-40ea644cb628 (ceph-create-lvm-devices) has been started and output is visible here.
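When post-processing logs like this, the per-host PLAY RECAP lines (see the `ok=42  changed=2 ...` counters earlier in this log) are the usual health signal. A small illustrative parser (a hypothetical helper, not part of OSISM tooling) for such lines:

```python
import re

# Matches an Ansible PLAY RECAP host line, e.g.:
#   testbed-node-5 : ok=42 changed=2 unreachable=0 failed=0 skipped=32 rescued=0 ignored=0
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counters>(?:\w+=\d+\s*)+)$")

def parse_recap(line: str) -> tuple[str, dict[str, int]]:
    """Return (hostname, counter dict) for one PLAY RECAP line."""
    m = RECAP_RE.match(line.strip())
    if m is None:
        raise ValueError(f"not a recap line: {line!r}")
    counters = {
        key: int(value)
        for key, value in (pair.split("=") for pair in m.group("counters").split())
    }
    return m.group("host"), counters

host, c = parse_recap(
    "testbed-node-5 : ok=42 changed=2 unreachable=0 failed=0 skipped=32 rescued=0 ignored=0"
)
print(host, c["failed"])  # testbed-node-5 0
```

A CI gate would typically fail the job when any host reports `failed > 0` or `unreachable > 0`; here all three nodes came back clean.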
2026-02-14 03:24:58.040562 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-14 03:24:58.040679 | orchestrator | 2.16.14
2026-02-14 03:24:58.040695 | orchestrator |
2026-02-14 03:24:58.040708 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-02-14 03:24:58.040720 | orchestrator |
2026-02-14 03:24:58.040731 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-14 03:24:58.040743 | orchestrator | Saturday 14 February 2026 03:24:50 +0000 (0:00:00.314) 0:00:00.314 *****
2026-02-14 03:24:58.040754 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-14 03:24:58.040765 | orchestrator |
2026-02-14 03:24:58.040776 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-14 03:24:58.040787 | orchestrator | Saturday 14 February 2026 03:24:50 +0000 (0:00:00.252) 0:00:00.566 *****
2026-02-14 03:24:58.040798 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:24:58.040809 | orchestrator |
2026-02-14 03:24:58.040819 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:24:58.040830 | orchestrator | Saturday 14 February 2026 03:24:51 +0000 (0:00:00.236) 0:00:00.803 *****
2026-02-14 03:24:58.040841 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-02-14 03:24:58.040852 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-02-14 03:24:58.040879 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-02-14 03:24:58.040891 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-02-14 03:24:58.040902 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-02-14 03:24:58.040912 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-02-14 03:24:58.040923 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-02-14 03:24:58.040934 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-02-14 03:24:58.040972 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-02-14 03:24:58.040984 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-02-14 03:24:58.041018 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-02-14 03:24:58.041029 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-02-14 03:24:58.041040 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-02-14 03:24:58.041051 | orchestrator |
2026-02-14 03:24:58.041062 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:24:58.041072 | orchestrator | Saturday 14 February 2026 03:24:51 +0000 (0:00:00.541) 0:00:01.344 *****
2026-02-14 03:24:58.041083 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:24:58.041094 | orchestrator |
2026-02-14 03:24:58.041105 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:24:58.041116 | orchestrator | Saturday 14 February 2026 03:24:51 +0000 (0:00:00.209) 0:00:01.553 *****
2026-02-14 03:24:58.041126 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:24:58.041137 | orchestrator |
2026-02-14 03:24:58.041147 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:24:58.041158 | orchestrator | Saturday 14 February 2026 03:24:52 +0000 (0:00:00.200) 0:00:01.754 *****
2026-02-14 03:24:58.041169 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:24:58.041179 | orchestrator |
2026-02-14 03:24:58.041190 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:24:58.041201 | orchestrator | Saturday 14 February 2026 03:24:52 +0000 (0:00:00.200) 0:00:01.954 *****
2026-02-14 03:24:58.041211 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:24:58.041222 | orchestrator |
2026-02-14 03:24:58.041232 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:24:58.041243 | orchestrator | Saturday 14 February 2026 03:24:52 +0000 (0:00:00.200) 0:00:02.155 *****
2026-02-14 03:24:58.041254 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:24:58.041264 | orchestrator |
2026-02-14 03:24:58.041275 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:24:58.041286 | orchestrator | Saturday 14 February 2026 03:24:52 +0000 (0:00:00.202) 0:00:02.358 *****
2026-02-14 03:24:58.041297 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:24:58.041307 | orchestrator |
2026-02-14 03:24:58.041318 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:24:58.041329 | orchestrator | Saturday 14 February 2026 03:24:52 +0000 (0:00:00.192) 0:00:02.551 *****
2026-02-14 03:24:58.041339 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:24:58.041350 | orchestrator |
2026-02-14 03:24:58.041361 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:24:58.041371 | orchestrator | Saturday 14 February 2026 03:24:53 +0000 (0:00:00.221) 0:00:02.772 *****
2026-02-14 03:24:58.041382 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:24:58.041392 | orchestrator |
2026-02-14 03:24:58.041403 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:24:58.041414 | orchestrator | Saturday 14 February 2026 03:24:53 +0000 (0:00:00.209) 0:00:02.982 *****
2026-02-14 03:24:58.041424 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2)
2026-02-14 03:24:58.041436 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2)
2026-02-14 03:24:58.041447 | orchestrator |
2026-02-14 03:24:58.041458 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:24:58.041486 | orchestrator | Saturday 14 February 2026 03:24:53 +0000 (0:00:00.400) 0:00:03.383 *****
2026-02-14 03:24:58.041498 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_763dae4f-8aba-40cd-b4e7-eeabad093491)
2026-02-14 03:24:58.041509 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_763dae4f-8aba-40cd-b4e7-eeabad093491)
2026-02-14 03:24:58.041519 | orchestrator |
2026-02-14 03:24:58.041530 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:24:58.041549 | orchestrator | Saturday 14 February 2026 03:24:54 +0000 (0:00:00.623) 0:00:04.006 *****
2026-02-14 03:24:58.041560 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2ec12fdb-ec43-4dc2-9206-4086e60213b8)
2026-02-14 03:24:58.041570 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2ec12fdb-ec43-4dc2-9206-4086e60213b8)
2026-02-14 03:24:58.041581 | orchestrator |
2026-02-14 03:24:58.041592 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:24:58.041603 | orchestrator | Saturday 14 February 2026 03:24:54 +0000 (0:00:00.656) 0:00:04.662 *****
2026-02-14 03:24:58.041614 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8657c064-423f-4604-b6db-e42322d0b025)
2026-02-14 03:24:58.041630 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8657c064-423f-4604-b6db-e42322d0b025)
2026-02-14 03:24:58.041641 | orchestrator |
2026-02-14 03:24:58.041652 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:24:58.041664 | orchestrator | Saturday 14 February 2026 03:24:55 +0000 (0:00:00.867) 0:00:05.530 *****
2026-02-14 03:24:58.041675 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-14 03:24:58.041685 | orchestrator |
2026-02-14 03:24:58.041696 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:24:58.041707 | orchestrator | Saturday 14 February 2026 03:24:56 +0000 (0:00:00.336) 0:00:05.866 *****
2026-02-14 03:24:58.041718 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-02-14 03:24:58.041728 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-02-14 03:24:58.041739 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-02-14 03:24:58.041750 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-02-14 03:24:58.041760 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-02-14 03:24:58.041771 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-02-14 03:24:58.041782 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-02-14 03:24:58.041793 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-02-14 03:24:58.041803 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-02-14 03:24:58.041814 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-02-14 03:24:58.041824 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-02-14 03:24:58.041835 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-02-14 03:24:58.041846 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-02-14 03:24:58.041857 | orchestrator |
2026-02-14 03:24:58.041868 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:24:58.041878 | orchestrator | Saturday 14 February 2026 03:24:56 +0000 (0:00:00.419) 0:00:06.286 *****
2026-02-14 03:24:58.041889 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:24:58.041900 | orchestrator |
2026-02-14 03:24:58.041911 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:24:58.041921 | orchestrator | Saturday 14 February 2026 03:24:56 +0000 (0:00:00.209) 0:00:06.496 *****
2026-02-14 03:24:58.041932 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:24:58.041943 | orchestrator |
2026-02-14 03:24:58.041969 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:24:58.041980 | orchestrator | Saturday 14 February 2026 03:24:56 +0000 (0:00:00.208) 0:00:06.704 *****
2026-02-14 03:24:58.041991 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:24:58.042008 | orchestrator |
2026-02-14 03:24:58.042084 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:24:58.042096 | orchestrator | Saturday 14 February 2026 03:24:57 +0000 (0:00:00.202) 0:00:06.906 *****
2026-02-14 03:24:58.042107 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:24:58.042118 | orchestrator |
2026-02-14 03:24:58.042128 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:24:58.042139 | orchestrator | Saturday 14 February 2026 03:24:57 +0000 (0:00:00.214) 0:00:07.120 *****
2026-02-14 03:24:58.042150 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:24:58.042160 | orchestrator |
2026-02-14 03:24:58.042171 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:24:58.042182 | orchestrator | Saturday 14 February 2026 03:24:57 +0000 (0:00:00.213) 0:00:07.333 *****
2026-02-14 03:24:58.042193 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:24:58.042203 | orchestrator |
2026-02-14 03:24:58.042214 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:24:58.042225 | orchestrator | Saturday 14 February 2026 03:24:57 +0000 (0:00:00.211) 0:00:07.545 *****
2026-02-14 03:24:58.042235 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:24:58.042246 | orchestrator |
2026-02-14 03:24:58.042264 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:25:06.135224 | orchestrator | Saturday 14 February 2026 03:24:58 +0000 (0:00:00.204) 0:00:07.750 *****
2026-02-14 03:25:06.135329 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:25:06.135344 | orchestrator |
2026-02-14 03:25:06.135355 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:25:06.135366 | orchestrator | Saturday 14 February 2026 03:24:58 +0000 (0:00:00.652) 0:00:08.402 *****
2026-02-14 03:25:06.135377 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-02-14 03:25:06.135387 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-02-14 03:25:06.135398 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-02-14 03:25:06.135407 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-02-14 03:25:06.135417 | orchestrator |
2026-02-14 03:25:06.135427 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:25:06.135437 | orchestrator | Saturday 14 February 2026 03:24:59 +0000 (0:00:00.658) 0:00:09.061 *****
2026-02-14 03:25:06.135447 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:25:06.135457 | orchestrator |
2026-02-14 03:25:06.135467 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:25:06.135476 | orchestrator | Saturday 14 February 2026 03:24:59 +0000 (0:00:00.196) 0:00:09.258 *****
2026-02-14 03:25:06.135486 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:25:06.135496 | orchestrator |
2026-02-14 03:25:06.135522 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:25:06.135533 | orchestrator | Saturday 14 February 2026 03:24:59 +0000 (0:00:00.188) 0:00:09.446 *****
2026-02-14 03:25:06.135543 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:25:06.135553 | orchestrator |
2026-02-14 03:25:06.135563 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:25:06.135572 | orchestrator | Saturday 14 February 2026 03:24:59 +0000 (0:00:00.214) 0:00:09.660 *****
2026-02-14 03:25:06.135582 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:25:06.135592 | orchestrator |
2026-02-14 03:25:06.135602 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-02-14 03:25:06.135612 | orchestrator | Saturday 14 February 2026 03:25:00 +0000 (0:00:00.204) 0:00:09.865 *****
2026-02-14 03:25:06.135622 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:25:06.135631 | orchestrator |
2026-02-14 03:25:06.135641 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-02-14 03:25:06.135651 | orchestrator | Saturday 14 February 2026 03:25:00 +0000 (0:00:00.137) 0:00:10.002 *****
2026-02-14 03:25:06.135662 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd74a1ea4-c27e-5375-be56-9d9a8e069fa6'}})
2026-02-14 03:25:06.135693 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '86d1df08-738c-52e0-accb-8c0a21213af6'}})
2026-02-14 03:25:06.135704 | orchestrator |
2026-02-14 03:25:06.135727 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-14 03:25:06.135739 | orchestrator | Saturday 14 February 2026 03:25:00 +0000 (0:00:00.199) 0:00:10.201 *****
2026-02-14 03:25:06.135749 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d74a1ea4-c27e-5375-be56-9d9a8e069fa6', 'data_vg': 'ceph-d74a1ea4-c27e-5375-be56-9d9a8e069fa6'})
2026-02-14 03:25:06.135761 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-86d1df08-738c-52e0-accb-8c0a21213af6', 'data_vg': 'ceph-86d1df08-738c-52e0-accb-8c0a21213af6'})
2026-02-14 03:25:06.135770 | orchestrator |
2026-02-14 03:25:06.135780 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-14 03:25:06.135790 | orchestrator | Saturday 14 February 2026 03:25:02 +0000 (0:00:01.896) 0:00:12.098 *****
2026-02-14 03:25:06.135800 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d74a1ea4-c27e-5375-be56-9d9a8e069fa6', 'data_vg': 'ceph-d74a1ea4-c27e-5375-be56-9d9a8e069fa6'})
2026-02-14 03:25:06.135810 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-86d1df08-738c-52e0-accb-8c0a21213af6', 'data_vg': 'ceph-86d1df08-738c-52e0-accb-8c0a21213af6'})
2026-02-14 03:25:06.135820 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:25:06.135830 | orchestrator |
2026-02-14 03:25:06.135840 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-14 03:25:06.135849 | orchestrator | Saturday 14 February 2026 03:25:02 +0000 (0:00:00.150) 0:00:12.249 *****
2026-02-14 03:25:06.135859 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d74a1ea4-c27e-5375-be56-9d9a8e069fa6', 'data_vg': 'ceph-d74a1ea4-c27e-5375-be56-9d9a8e069fa6'})
2026-02-14 03:25:06.135869 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-86d1df08-738c-52e0-accb-8c0a21213af6', 'data_vg': 'ceph-86d1df08-738c-52e0-accb-8c0a21213af6'})
2026-02-14 03:25:06.135878 | orchestrator |
2026-02-14 03:25:06.135888 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-14 03:25:06.135898 | orchestrator | Saturday 14 February 2026 03:25:04 +0000 (0:00:01.503) 0:00:13.752 *****
2026-02-14 03:25:06.135932 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d74a1ea4-c27e-5375-be56-9d9a8e069fa6', 'data_vg': 'ceph-d74a1ea4-c27e-5375-be56-9d9a8e069fa6'})
2026-02-14 03:25:06.135943 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-86d1df08-738c-52e0-accb-8c0a21213af6', 'data_vg': 'ceph-86d1df08-738c-52e0-accb-8c0a21213af6'})
2026-02-14 03:25:06.135952 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:25:06.135962 | orchestrator |
2026-02-14 03:25:06.135972 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-14 03:25:06.135981 | orchestrator | Saturday 14 February 2026 03:25:04 +0000 (0:00:00.163) 0:00:13.916 *****
2026-02-14 03:25:06.136006 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:25:06.136017 | orchestrator |
2026-02-14 03:25:06.136027 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-14 03:25:06.136036 | orchestrator | Saturday 14 February 2026 03:25:04 +0000 (0:00:00.371) 0:00:14.287 *****
2026-02-14 03:25:06.136046 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d74a1ea4-c27e-5375-be56-9d9a8e069fa6', 'data_vg': 'ceph-d74a1ea4-c27e-5375-be56-9d9a8e069fa6'})
2026-02-14 03:25:06.136056 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-86d1df08-738c-52e0-accb-8c0a21213af6', 'data_vg': 'ceph-86d1df08-738c-52e0-accb-8c0a21213af6'})
2026-02-14 03:25:06.136066 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:25:06.136076 | orchestrator |
2026-02-14 03:25:06.136085 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-14 03:25:06.136095 | orchestrator | Saturday 14 February 2026 03:25:04 +0000 (0:00:00.167) 0:00:14.454 *****
2026-02-14 03:25:06.136112 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:25:06.136122 | orchestrator |
2026-02-14 03:25:06.136132 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-14 03:25:06.136141 | orchestrator | Saturday 14 February 2026 03:25:04 +0000 (0:00:00.151) 0:00:14.606 *****
2026-02-14 03:25:06.136156 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d74a1ea4-c27e-5375-be56-9d9a8e069fa6', 'data_vg': 'ceph-d74a1ea4-c27e-5375-be56-9d9a8e069fa6'})
2026-02-14 03:25:06.136167 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-86d1df08-738c-52e0-accb-8c0a21213af6', 'data_vg': 'ceph-86d1df08-738c-52e0-accb-8c0a21213af6'})
2026-02-14 03:25:06.136177 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:25:06.136186 | orchestrator |
2026-02-14 03:25:06.136196 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-14 03:25:06.136205 | orchestrator | Saturday 14 February 2026 03:25:05 +0000 (0:00:00.165) 0:00:14.772 *****
2026-02-14 03:25:06.136215 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:25:06.136224 | orchestrator |
2026-02-14 03:25:06.136234 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-14 03:25:06.136243 | orchestrator | Saturday 14 February 2026 03:25:05 +0000 (0:00:00.136) 0:00:14.908 *****
2026-02-14 03:25:06.136253 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d74a1ea4-c27e-5375-be56-9d9a8e069fa6', 'data_vg': 'ceph-d74a1ea4-c27e-5375-be56-9d9a8e069fa6'})
2026-02-14 03:25:06.136263 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-86d1df08-738c-52e0-accb-8c0a21213af6', 'data_vg': 'ceph-86d1df08-738c-52e0-accb-8c0a21213af6'})
2026-02-14 03:25:06.136272 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:25:06.136282 | orchestrator |
2026-02-14 03:25:06.136292 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-02-14 03:25:06.136301 | orchestrator | Saturday 14 February 2026 03:25:05 +0000 (0:00:00.148) 0:00:15.073 *****
2026-02-14 03:25:06.136311 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:25:06.136321 | orchestrator |
2026-02-14 03:25:06.136330 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-02-14 03:25:06.136340 | orchestrator | Saturday 14 February 2026 03:25:05 +0000 (0:00:00.156) 0:00:15.222 *****
2026-02-14 03:25:06.136350 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d74a1ea4-c27e-5375-be56-9d9a8e069fa6', 'data_vg': 'ceph-d74a1ea4-c27e-5375-be56-9d9a8e069fa6'})
2026-02-14 03:25:06.136359 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-86d1df08-738c-52e0-accb-8c0a21213af6', 'data_vg': 'ceph-86d1df08-738c-52e0-accb-8c0a21213af6'})
2026-02-14 03:25:06.136369 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:25:06.136378 | orchestrator |
2026-02-14 03:25:06.136388 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-02-14 03:25:06.136398 | orchestrator | Saturday 14 February 2026 03:25:05 +0000 (0:00:00.156) 0:00:15.379 *****
2026-02-14 03:25:06.136407 | orchestrator | skipping: [testbed-node-3] =>
(item={'data': 'osd-block-d74a1ea4-c27e-5375-be56-9d9a8e069fa6', 'data_vg': 'ceph-d74a1ea4-c27e-5375-be56-9d9a8e069fa6'})  2026-02-14 03:25:06.136417 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-86d1df08-738c-52e0-accb-8c0a21213af6', 'data_vg': 'ceph-86d1df08-738c-52e0-accb-8c0a21213af6'})  2026-02-14 03:25:06.136426 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:25:06.136436 | orchestrator | 2026-02-14 03:25:06.136446 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-14 03:25:06.136455 | orchestrator | Saturday 14 February 2026 03:25:05 +0000 (0:00:00.169) 0:00:15.548 ***** 2026-02-14 03:25:06.136465 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d74a1ea4-c27e-5375-be56-9d9a8e069fa6', 'data_vg': 'ceph-d74a1ea4-c27e-5375-be56-9d9a8e069fa6'})  2026-02-14 03:25:06.136475 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-86d1df08-738c-52e0-accb-8c0a21213af6', 'data_vg': 'ceph-86d1df08-738c-52e0-accb-8c0a21213af6'})  2026-02-14 03:25:06.136490 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:25:06.136500 | orchestrator | 2026-02-14 03:25:06.136510 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-14 03:25:06.136519 | orchestrator | Saturday 14 February 2026 03:25:05 +0000 (0:00:00.164) 0:00:15.713 ***** 2026-02-14 03:25:06.136529 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:25:06.136538 | orchestrator | 2026-02-14 03:25:06.136548 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-14 03:25:06.136563 | orchestrator | Saturday 14 February 2026 03:25:06 +0000 (0:00:00.136) 0:00:15.849 ***** 2026-02-14 03:25:12.996989 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:25:12.997124 | orchestrator | 2026-02-14 03:25:12.997153 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2026-02-14 03:25:12.997176 | orchestrator | Saturday 14 February 2026 03:25:06 +0000 (0:00:00.144) 0:00:15.993 ***** 2026-02-14 03:25:12.997196 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:25:12.997218 | orchestrator | 2026-02-14 03:25:12.997236 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-02-14 03:25:12.997255 | orchestrator | Saturday 14 February 2026 03:25:06 +0000 (0:00:00.380) 0:00:16.374 ***** 2026-02-14 03:25:12.997275 | orchestrator | ok: [testbed-node-3] => { 2026-02-14 03:25:12.997296 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-02-14 03:25:12.997317 | orchestrator | } 2026-02-14 03:25:12.997337 | orchestrator | 2026-02-14 03:25:12.997349 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-02-14 03:25:12.997360 | orchestrator | Saturday 14 February 2026 03:25:06 +0000 (0:00:00.145) 0:00:16.519 ***** 2026-02-14 03:25:12.997371 | orchestrator | ok: [testbed-node-3] => { 2026-02-14 03:25:12.997382 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-02-14 03:25:12.997393 | orchestrator | } 2026-02-14 03:25:12.997407 | orchestrator | 2026-02-14 03:25:12.997420 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-02-14 03:25:12.997451 | orchestrator | Saturday 14 February 2026 03:25:06 +0000 (0:00:00.146) 0:00:16.666 ***** 2026-02-14 03:25:12.997464 | orchestrator | ok: [testbed-node-3] => { 2026-02-14 03:25:12.997476 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-02-14 03:25:12.997489 | orchestrator | } 2026-02-14 03:25:12.997501 | orchestrator | 2026-02-14 03:25:12.997514 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-02-14 03:25:12.997526 | orchestrator | Saturday 14 February 2026 03:25:07 +0000 (0:00:00.145) 0:00:16.812 ***** 2026-02-14 03:25:12.997542 | orchestrator | ok: 
[testbed-node-3] 2026-02-14 03:25:12.997562 | orchestrator | 2026-02-14 03:25:12.997579 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-02-14 03:25:12.997597 | orchestrator | Saturday 14 February 2026 03:25:07 +0000 (0:00:00.695) 0:00:17.507 ***** 2026-02-14 03:25:12.997616 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:25:12.997636 | orchestrator | 2026-02-14 03:25:12.997653 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-02-14 03:25:12.997672 | orchestrator | Saturday 14 February 2026 03:25:08 +0000 (0:00:00.578) 0:00:18.086 ***** 2026-02-14 03:25:12.997690 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:25:12.997709 | orchestrator | 2026-02-14 03:25:12.997728 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-02-14 03:25:12.997747 | orchestrator | Saturday 14 February 2026 03:25:08 +0000 (0:00:00.549) 0:00:18.636 ***** 2026-02-14 03:25:12.997768 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:25:12.997786 | orchestrator | 2026-02-14 03:25:12.997803 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-02-14 03:25:12.997815 | orchestrator | Saturday 14 February 2026 03:25:09 +0000 (0:00:00.158) 0:00:18.794 ***** 2026-02-14 03:25:12.997826 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:25:12.997837 | orchestrator | 2026-02-14 03:25:12.997848 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-02-14 03:25:12.997918 | orchestrator | Saturday 14 February 2026 03:25:09 +0000 (0:00:00.116) 0:00:18.911 ***** 2026-02-14 03:25:12.997932 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:25:12.997943 | orchestrator | 2026-02-14 03:25:12.997954 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-02-14 03:25:12.997965 | orchestrator | 
Saturday 14 February 2026 03:25:09 +0000 (0:00:00.116) 0:00:19.028 ***** 2026-02-14 03:25:12.997976 | orchestrator | ok: [testbed-node-3] => { 2026-02-14 03:25:12.997987 | orchestrator |  "vgs_report": { 2026-02-14 03:25:12.997999 | orchestrator |  "vg": [] 2026-02-14 03:25:12.998010 | orchestrator |  } 2026-02-14 03:25:12.998084 | orchestrator | } 2026-02-14 03:25:12.998096 | orchestrator | 2026-02-14 03:25:12.998110 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-02-14 03:25:12.998143 | orchestrator | Saturday 14 February 2026 03:25:09 +0000 (0:00:00.170) 0:00:19.198 ***** 2026-02-14 03:25:12.998162 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:25:12.998256 | orchestrator | 2026-02-14 03:25:12.998276 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-02-14 03:25:12.998294 | orchestrator | Saturday 14 February 2026 03:25:09 +0000 (0:00:00.142) 0:00:19.341 ***** 2026-02-14 03:25:12.998312 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:25:12.998332 | orchestrator | 2026-02-14 03:25:12.998352 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-02-14 03:25:12.998369 | orchestrator | Saturday 14 February 2026 03:25:09 +0000 (0:00:00.345) 0:00:19.687 ***** 2026-02-14 03:25:12.998387 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:25:12.998402 | orchestrator | 2026-02-14 03:25:12.998413 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-02-14 03:25:12.998424 | orchestrator | Saturday 14 February 2026 03:25:10 +0000 (0:00:00.130) 0:00:19.818 ***** 2026-02-14 03:25:12.998435 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:25:12.998446 | orchestrator | 2026-02-14 03:25:12.998457 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-14 03:25:12.998468 | orchestrator | 
Saturday 14 February 2026 03:25:10 +0000 (0:00:00.138) 0:00:19.956 ***** 2026-02-14 03:25:12.998479 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:25:12.998489 | orchestrator | 2026-02-14 03:25:12.998500 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-14 03:25:12.998511 | orchestrator | Saturday 14 February 2026 03:25:10 +0000 (0:00:00.179) 0:00:20.136 ***** 2026-02-14 03:25:12.998522 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:25:12.998533 | orchestrator | 2026-02-14 03:25:12.998544 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-14 03:25:12.998555 | orchestrator | Saturday 14 February 2026 03:25:10 +0000 (0:00:00.148) 0:00:20.284 ***** 2026-02-14 03:25:12.998565 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:25:12.998576 | orchestrator | 2026-02-14 03:25:12.998587 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-14 03:25:12.998598 | orchestrator | Saturday 14 February 2026 03:25:10 +0000 (0:00:00.159) 0:00:20.444 ***** 2026-02-14 03:25:12.998631 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:25:12.998643 | orchestrator | 2026-02-14 03:25:12.998654 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-14 03:25:12.998665 | orchestrator | Saturday 14 February 2026 03:25:10 +0000 (0:00:00.162) 0:00:20.607 ***** 2026-02-14 03:25:12.998676 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:25:12.998687 | orchestrator | 2026-02-14 03:25:12.998698 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-02-14 03:25:12.998709 | orchestrator | Saturday 14 February 2026 03:25:11 +0000 (0:00:00.160) 0:00:20.768 ***** 2026-02-14 03:25:12.998720 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:25:12.998731 | orchestrator | 2026-02-14 03:25:12.998742 
| orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-14 03:25:12.998753 | orchestrator | Saturday 14 February 2026 03:25:11 +0000 (0:00:00.142) 0:00:20.910 ***** 2026-02-14 03:25:12.998775 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:25:12.998789 | orchestrator | 2026-02-14 03:25:12.998807 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-14 03:25:12.998826 | orchestrator | Saturday 14 February 2026 03:25:11 +0000 (0:00:00.152) 0:00:21.062 ***** 2026-02-14 03:25:12.998856 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:25:12.998915 | orchestrator | 2026-02-14 03:25:12.998950 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-14 03:25:12.998969 | orchestrator | Saturday 14 February 2026 03:25:11 +0000 (0:00:00.138) 0:00:21.201 ***** 2026-02-14 03:25:12.998986 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:25:12.998997 | orchestrator | 2026-02-14 03:25:12.999008 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-14 03:25:12.999019 | orchestrator | Saturday 14 February 2026 03:25:11 +0000 (0:00:00.144) 0:00:21.346 ***** 2026-02-14 03:25:12.999030 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:25:12.999041 | orchestrator | 2026-02-14 03:25:12.999052 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-14 03:25:12.999063 | orchestrator | Saturday 14 February 2026 03:25:11 +0000 (0:00:00.378) 0:00:21.724 ***** 2026-02-14 03:25:12.999076 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d74a1ea4-c27e-5375-be56-9d9a8e069fa6', 'data_vg': 'ceph-d74a1ea4-c27e-5375-be56-9d9a8e069fa6'})  2026-02-14 03:25:12.999088 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-86d1df08-738c-52e0-accb-8c0a21213af6', 'data_vg': 
'ceph-86d1df08-738c-52e0-accb-8c0a21213af6'})  2026-02-14 03:25:12.999099 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:25:12.999110 | orchestrator | 2026-02-14 03:25:12.999122 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-14 03:25:12.999133 | orchestrator | Saturday 14 February 2026 03:25:12 +0000 (0:00:00.157) 0:00:21.882 ***** 2026-02-14 03:25:12.999144 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d74a1ea4-c27e-5375-be56-9d9a8e069fa6', 'data_vg': 'ceph-d74a1ea4-c27e-5375-be56-9d9a8e069fa6'})  2026-02-14 03:25:12.999158 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-86d1df08-738c-52e0-accb-8c0a21213af6', 'data_vg': 'ceph-86d1df08-738c-52e0-accb-8c0a21213af6'})  2026-02-14 03:25:12.999175 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:25:12.999194 | orchestrator | 2026-02-14 03:25:12.999212 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-14 03:25:12.999229 | orchestrator | Saturday 14 February 2026 03:25:12 +0000 (0:00:00.159) 0:00:22.042 ***** 2026-02-14 03:25:12.999248 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d74a1ea4-c27e-5375-be56-9d9a8e069fa6', 'data_vg': 'ceph-d74a1ea4-c27e-5375-be56-9d9a8e069fa6'})  2026-02-14 03:25:12.999259 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-86d1df08-738c-52e0-accb-8c0a21213af6', 'data_vg': 'ceph-86d1df08-738c-52e0-accb-8c0a21213af6'})  2026-02-14 03:25:12.999270 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:25:12.999281 | orchestrator | 2026-02-14 03:25:12.999292 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-02-14 03:25:12.999304 | orchestrator | Saturday 14 February 2026 03:25:12 +0000 (0:00:00.166) 0:00:22.208 ***** 2026-02-14 03:25:12.999323 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-d74a1ea4-c27e-5375-be56-9d9a8e069fa6', 'data_vg': 'ceph-d74a1ea4-c27e-5375-be56-9d9a8e069fa6'})  2026-02-14 03:25:12.999341 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-86d1df08-738c-52e0-accb-8c0a21213af6', 'data_vg': 'ceph-86d1df08-738c-52e0-accb-8c0a21213af6'})  2026-02-14 03:25:12.999359 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:25:12.999378 | orchestrator | 2026-02-14 03:25:12.999398 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-14 03:25:12.999416 | orchestrator | Saturday 14 February 2026 03:25:12 +0000 (0:00:00.163) 0:00:22.372 ***** 2026-02-14 03:25:12.999444 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d74a1ea4-c27e-5375-be56-9d9a8e069fa6', 'data_vg': 'ceph-d74a1ea4-c27e-5375-be56-9d9a8e069fa6'})  2026-02-14 03:25:12.999455 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-86d1df08-738c-52e0-accb-8c0a21213af6', 'data_vg': 'ceph-86d1df08-738c-52e0-accb-8c0a21213af6'})  2026-02-14 03:25:12.999466 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:25:12.999477 | orchestrator | 2026-02-14 03:25:12.999488 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-14 03:25:12.999499 | orchestrator | Saturday 14 February 2026 03:25:12 +0000 (0:00:00.168) 0:00:22.540 ***** 2026-02-14 03:25:12.999522 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d74a1ea4-c27e-5375-be56-9d9a8e069fa6', 'data_vg': 'ceph-d74a1ea4-c27e-5375-be56-9d9a8e069fa6'})  2026-02-14 03:25:18.472603 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-86d1df08-738c-52e0-accb-8c0a21213af6', 'data_vg': 'ceph-86d1df08-738c-52e0-accb-8c0a21213af6'})  2026-02-14 03:25:18.472739 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:25:18.472768 | orchestrator | 2026-02-14 03:25:18.472790 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-02-14 03:25:18.472812 | orchestrator | Saturday 14 February 2026 03:25:12 +0000 (0:00:00.172) 0:00:22.712 ***** 2026-02-14 03:25:18.472834 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d74a1ea4-c27e-5375-be56-9d9a8e069fa6', 'data_vg': 'ceph-d74a1ea4-c27e-5375-be56-9d9a8e069fa6'})  2026-02-14 03:25:18.472888 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-86d1df08-738c-52e0-accb-8c0a21213af6', 'data_vg': 'ceph-86d1df08-738c-52e0-accb-8c0a21213af6'})  2026-02-14 03:25:18.472908 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:25:18.472926 | orchestrator | 2026-02-14 03:25:18.472966 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-14 03:25:18.472986 | orchestrator | Saturday 14 February 2026 03:25:13 +0000 (0:00:00.162) 0:00:22.875 ***** 2026-02-14 03:25:18.473004 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d74a1ea4-c27e-5375-be56-9d9a8e069fa6', 'data_vg': 'ceph-d74a1ea4-c27e-5375-be56-9d9a8e069fa6'})  2026-02-14 03:25:18.473023 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-86d1df08-738c-52e0-accb-8c0a21213af6', 'data_vg': 'ceph-86d1df08-738c-52e0-accb-8c0a21213af6'})  2026-02-14 03:25:18.473041 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:25:18.473059 | orchestrator | 2026-02-14 03:25:18.473078 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-14 03:25:18.473097 | orchestrator | Saturday 14 February 2026 03:25:13 +0000 (0:00:00.165) 0:00:23.040 ***** 2026-02-14 03:25:18.473116 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:25:18.473137 | orchestrator | 2026-02-14 03:25:18.473157 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-14 03:25:18.473176 | orchestrator | Saturday 14 February 2026 03:25:13 +0000 
(0:00:00.609) 0:00:23.650 ***** 2026-02-14 03:25:18.473194 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:25:18.473213 | orchestrator | 2026-02-14 03:25:18.473232 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-14 03:25:18.473251 | orchestrator | Saturday 14 February 2026 03:25:14 +0000 (0:00:00.542) 0:00:24.192 ***** 2026-02-14 03:25:18.473268 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:25:18.473286 | orchestrator | 2026-02-14 03:25:18.473304 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-14 03:25:18.473324 | orchestrator | Saturday 14 February 2026 03:25:14 +0000 (0:00:00.152) 0:00:24.344 ***** 2026-02-14 03:25:18.473346 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-86d1df08-738c-52e0-accb-8c0a21213af6', 'vg_name': 'ceph-86d1df08-738c-52e0-accb-8c0a21213af6'}) 2026-02-14 03:25:18.473367 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-d74a1ea4-c27e-5375-be56-9d9a8e069fa6', 'vg_name': 'ceph-d74a1ea4-c27e-5375-be56-9d9a8e069fa6'}) 2026-02-14 03:25:18.473415 | orchestrator | 2026-02-14 03:25:18.473435 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-14 03:25:18.473452 | orchestrator | Saturday 14 February 2026 03:25:14 +0000 (0:00:00.187) 0:00:24.532 ***** 2026-02-14 03:25:18.473469 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d74a1ea4-c27e-5375-be56-9d9a8e069fa6', 'data_vg': 'ceph-d74a1ea4-c27e-5375-be56-9d9a8e069fa6'})  2026-02-14 03:25:18.473485 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-86d1df08-738c-52e0-accb-8c0a21213af6', 'data_vg': 'ceph-86d1df08-738c-52e0-accb-8c0a21213af6'})  2026-02-14 03:25:18.473501 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:25:18.473519 | orchestrator | 2026-02-14 03:25:18.473536 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-02-14 03:25:18.473554 | orchestrator | Saturday 14 February 2026 03:25:15 +0000 (0:00:00.417) 0:00:24.950 ***** 2026-02-14 03:25:18.473571 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d74a1ea4-c27e-5375-be56-9d9a8e069fa6', 'data_vg': 'ceph-d74a1ea4-c27e-5375-be56-9d9a8e069fa6'})  2026-02-14 03:25:18.473589 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-86d1df08-738c-52e0-accb-8c0a21213af6', 'data_vg': 'ceph-86d1df08-738c-52e0-accb-8c0a21213af6'})  2026-02-14 03:25:18.473608 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:25:18.473626 | orchestrator | 2026-02-14 03:25:18.473644 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-14 03:25:18.473660 | orchestrator | Saturday 14 February 2026 03:25:15 +0000 (0:00:00.151) 0:00:25.102 ***** 2026-02-14 03:25:18.473671 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d74a1ea4-c27e-5375-be56-9d9a8e069fa6', 'data_vg': 'ceph-d74a1ea4-c27e-5375-be56-9d9a8e069fa6'})  2026-02-14 03:25:18.473682 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-86d1df08-738c-52e0-accb-8c0a21213af6', 'data_vg': 'ceph-86d1df08-738c-52e0-accb-8c0a21213af6'})  2026-02-14 03:25:18.473693 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:25:18.473704 | orchestrator | 2026-02-14 03:25:18.473714 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-14 03:25:18.473725 | orchestrator | Saturday 14 February 2026 03:25:15 +0000 (0:00:00.174) 0:00:25.276 ***** 2026-02-14 03:25:18.473759 | orchestrator | ok: [testbed-node-3] => { 2026-02-14 03:25:18.473771 | orchestrator |  "lvm_report": { 2026-02-14 03:25:18.473783 | orchestrator |  "lv": [ 2026-02-14 03:25:18.473793 | orchestrator |  { 2026-02-14 03:25:18.473804 | orchestrator |  "lv_name": 
"osd-block-86d1df08-738c-52e0-accb-8c0a21213af6", 2026-02-14 03:25:18.473816 | orchestrator |  "vg_name": "ceph-86d1df08-738c-52e0-accb-8c0a21213af6" 2026-02-14 03:25:18.473827 | orchestrator |  }, 2026-02-14 03:25:18.473838 | orchestrator |  { 2026-02-14 03:25:18.473884 | orchestrator |  "lv_name": "osd-block-d74a1ea4-c27e-5375-be56-9d9a8e069fa6", 2026-02-14 03:25:18.473903 | orchestrator |  "vg_name": "ceph-d74a1ea4-c27e-5375-be56-9d9a8e069fa6" 2026-02-14 03:25:18.473923 | orchestrator |  } 2026-02-14 03:25:18.473940 | orchestrator |  ], 2026-02-14 03:25:18.473957 | orchestrator |  "pv": [ 2026-02-14 03:25:18.473969 | orchestrator |  { 2026-02-14 03:25:18.473979 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-14 03:25:18.473990 | orchestrator |  "vg_name": "ceph-d74a1ea4-c27e-5375-be56-9d9a8e069fa6" 2026-02-14 03:25:18.474001 | orchestrator |  }, 2026-02-14 03:25:18.474011 | orchestrator |  { 2026-02-14 03:25:18.474091 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-14 03:25:18.474104 | orchestrator |  "vg_name": "ceph-86d1df08-738c-52e0-accb-8c0a21213af6" 2026-02-14 03:25:18.474115 | orchestrator |  } 2026-02-14 03:25:18.474126 | orchestrator |  ] 2026-02-14 03:25:18.474136 | orchestrator |  } 2026-02-14 03:25:18.474148 | orchestrator | } 2026-02-14 03:25:18.474171 | orchestrator | 2026-02-14 03:25:18.474182 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-14 03:25:18.474193 | orchestrator | 2026-02-14 03:25:18.474204 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-14 03:25:18.474215 | orchestrator | Saturday 14 February 2026 03:25:15 +0000 (0:00:00.295) 0:00:25.571 ***** 2026-02-14 03:25:18.474226 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-14 03:25:18.474237 | orchestrator | 2026-02-14 03:25:18.474247 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-14 
03:25:18.474258 | orchestrator | Saturday 14 February 2026 03:25:16 +0000 (0:00:00.264) 0:00:25.835 ***** 2026-02-14 03:25:18.474269 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:25:18.474279 | orchestrator | 2026-02-14 03:25:18.474289 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-14 03:25:18.474299 | orchestrator | Saturday 14 February 2026 03:25:16 +0000 (0:00:00.240) 0:00:26.076 ***** 2026-02-14 03:25:18.474308 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-02-14 03:25:18.474318 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-02-14 03:25:18.474327 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-02-14 03:25:18.474337 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-02-14 03:25:18.474346 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-02-14 03:25:18.474356 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-02-14 03:25:18.474365 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-02-14 03:25:18.474374 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-02-14 03:25:18.474384 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-02-14 03:25:18.474393 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-02-14 03:25:18.474402 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-02-14 03:25:18.474412 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-02-14 03:25:18.474421 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-02-14 03:25:18.474431 | orchestrator | 2026-02-14 03:25:18.474440 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-14 03:25:18.474450 | orchestrator | Saturday 14 February 2026 03:25:16 +0000 (0:00:00.450) 0:00:26.527 ***** 2026-02-14 03:25:18.474459 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:18.474468 | orchestrator | 2026-02-14 03:25:18.474478 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-14 03:25:18.474487 | orchestrator | Saturday 14 February 2026 03:25:17 +0000 (0:00:00.203) 0:00:26.730 ***** 2026-02-14 03:25:18.474497 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:18.474506 | orchestrator | 2026-02-14 03:25:18.474516 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-14 03:25:18.474525 | orchestrator | Saturday 14 February 2026 03:25:17 +0000 (0:00:00.590) 0:00:27.320 ***** 2026-02-14 03:25:18.474535 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:18.474544 | orchestrator | 2026-02-14 03:25:18.474554 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-14 03:25:18.474563 | orchestrator | Saturday 14 February 2026 03:25:17 +0000 (0:00:00.207) 0:00:27.527 ***** 2026-02-14 03:25:18.474573 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:18.474582 | orchestrator | 2026-02-14 03:25:18.474592 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-14 03:25:18.474601 | orchestrator | Saturday 14 February 2026 03:25:18 +0000 (0:00:00.228) 0:00:27.756 ***** 2026-02-14 03:25:18.474617 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:18.474626 | orchestrator | 2026-02-14 03:25:18.474636 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-02-14 03:25:18.474645 | orchestrator | Saturday 14 February 2026 03:25:18 +0000 (0:00:00.213) 0:00:27.969 ***** 2026-02-14 03:25:18.474655 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:18.474664 | orchestrator | 2026-02-14 03:25:18.474683 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-14 03:25:29.847372 | orchestrator | Saturday 14 February 2026 03:25:18 +0000 (0:00:00.217) 0:00:28.187 ***** 2026-02-14 03:25:29.847489 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:29.847506 | orchestrator | 2026-02-14 03:25:29.847519 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-14 03:25:29.847530 | orchestrator | Saturday 14 February 2026 03:25:18 +0000 (0:00:00.223) 0:00:28.410 ***** 2026-02-14 03:25:29.847541 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:29.847553 | orchestrator | 2026-02-14 03:25:29.847563 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-14 03:25:29.847574 | orchestrator | Saturday 14 February 2026 03:25:18 +0000 (0:00:00.221) 0:00:28.632 ***** 2026-02-14 03:25:29.847585 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889) 2026-02-14 03:25:29.847597 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889) 2026-02-14 03:25:29.847608 | orchestrator | 2026-02-14 03:25:29.847634 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-14 03:25:29.847646 | orchestrator | Saturday 14 February 2026 03:25:19 +0000 (0:00:00.445) 0:00:29.077 ***** 2026-02-14 03:25:29.847657 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f8b6a063-90c4-466f-950a-7ec8689e5fcc) 2026-02-14 03:25:29.847668 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f8b6a063-90c4-466f-950a-7ec8689e5fcc) 2026-02-14 03:25:29.847679 | orchestrator | 2026-02-14 03:25:29.847690 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-14 03:25:29.847701 | orchestrator | Saturday 14 February 2026 03:25:19 +0000 (0:00:00.474) 0:00:29.552 ***** 2026-02-14 03:25:29.847711 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f960435b-b83d-47c8-ac31-653544f80bd0) 2026-02-14 03:25:29.847722 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f960435b-b83d-47c8-ac31-653544f80bd0) 2026-02-14 03:25:29.847733 | orchestrator | 2026-02-14 03:25:29.847744 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-14 03:25:29.847755 | orchestrator | Saturday 14 February 2026 03:25:20 +0000 (0:00:00.678) 0:00:30.230 ***** 2026-02-14 03:25:29.847766 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_600e740f-7698-45cf-9f18-28df3084435e) 2026-02-14 03:25:29.847776 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_600e740f-7698-45cf-9f18-28df3084435e) 2026-02-14 03:25:29.847857 | orchestrator | 2026-02-14 03:25:29.847873 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-14 03:25:29.847885 | orchestrator | Saturday 14 February 2026 03:25:21 +0000 (0:00:00.920) 0:00:31.151 ***** 2026-02-14 03:25:29.847896 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-14 03:25:29.847908 | orchestrator | 2026-02-14 03:25:29.847920 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-14 03:25:29.847933 | orchestrator | Saturday 14 February 2026 03:25:21 +0000 (0:00:00.381) 0:00:31.533 ***** 2026-02-14 03:25:29.847945 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-02-14 03:25:29.847958 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-02-14 03:25:29.847971 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-02-14 03:25:29.848004 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-02-14 03:25:29.848017 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-02-14 03:25:29.848029 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-02-14 03:25:29.848040 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-02-14 03:25:29.848051 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-02-14 03:25:29.848062 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-02-14 03:25:29.848072 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-02-14 03:25:29.848083 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-02-14 03:25:29.848094 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-02-14 03:25:29.848105 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-02-14 03:25:29.848115 | orchestrator | 2026-02-14 03:25:29.848126 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-14 03:25:29.848137 | orchestrator | Saturday 14 February 2026 03:25:22 +0000 (0:00:00.433) 0:00:31.966 ***** 2026-02-14 03:25:29.848148 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:29.848159 | orchestrator | 2026-02-14 
03:25:29.848170 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-14 03:25:29.848181 | orchestrator | Saturday 14 February 2026 03:25:22 +0000 (0:00:00.215) 0:00:32.182 ***** 2026-02-14 03:25:29.848192 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:29.848203 | orchestrator | 2026-02-14 03:25:29.848213 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-14 03:25:29.848224 | orchestrator | Saturday 14 February 2026 03:25:22 +0000 (0:00:00.211) 0:00:32.393 ***** 2026-02-14 03:25:29.848235 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:29.848246 | orchestrator | 2026-02-14 03:25:29.848274 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-14 03:25:29.848286 | orchestrator | Saturday 14 February 2026 03:25:22 +0000 (0:00:00.204) 0:00:32.597 ***** 2026-02-14 03:25:29.848298 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:29.848309 | orchestrator | 2026-02-14 03:25:29.848320 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-14 03:25:29.848331 | orchestrator | Saturday 14 February 2026 03:25:23 +0000 (0:00:00.207) 0:00:32.805 ***** 2026-02-14 03:25:29.848342 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:29.848353 | orchestrator | 2026-02-14 03:25:29.848364 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-14 03:25:29.848376 | orchestrator | Saturday 14 February 2026 03:25:23 +0000 (0:00:00.225) 0:00:33.031 ***** 2026-02-14 03:25:29.848386 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:29.848397 | orchestrator | 2026-02-14 03:25:29.848408 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-14 03:25:29.848419 | orchestrator | Saturday 14 February 2026 03:25:23 +0000 (0:00:00.212) 
0:00:33.243 ***** 2026-02-14 03:25:29.848436 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:29.848448 | orchestrator | 2026-02-14 03:25:29.848459 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-14 03:25:29.848470 | orchestrator | Saturday 14 February 2026 03:25:23 +0000 (0:00:00.215) 0:00:33.459 ***** 2026-02-14 03:25:29.848480 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:29.848491 | orchestrator | 2026-02-14 03:25:29.848502 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-14 03:25:29.848513 | orchestrator | Saturday 14 February 2026 03:25:24 +0000 (0:00:00.672) 0:00:34.132 ***** 2026-02-14 03:25:29.848524 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-02-14 03:25:29.848542 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-02-14 03:25:29.848554 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-02-14 03:25:29.848565 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-02-14 03:25:29.848575 | orchestrator | 2026-02-14 03:25:29.848586 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-14 03:25:29.848597 | orchestrator | Saturday 14 February 2026 03:25:25 +0000 (0:00:00.712) 0:00:34.845 ***** 2026-02-14 03:25:29.848608 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:29.848618 | orchestrator | 2026-02-14 03:25:29.848630 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-14 03:25:29.848640 | orchestrator | Saturday 14 February 2026 03:25:25 +0000 (0:00:00.210) 0:00:35.055 ***** 2026-02-14 03:25:29.848651 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:29.848662 | orchestrator | 2026-02-14 03:25:29.848673 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-14 03:25:29.848684 | orchestrator | Saturday 14 
February 2026 03:25:25 +0000 (0:00:00.222) 0:00:35.278 ***** 2026-02-14 03:25:29.848695 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:29.848705 | orchestrator | 2026-02-14 03:25:29.848716 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-14 03:25:29.848727 | orchestrator | Saturday 14 February 2026 03:25:25 +0000 (0:00:00.192) 0:00:35.471 ***** 2026-02-14 03:25:29.848738 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:29.848749 | orchestrator | 2026-02-14 03:25:29.848760 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-02-14 03:25:29.848770 | orchestrator | Saturday 14 February 2026 03:25:25 +0000 (0:00:00.226) 0:00:35.697 ***** 2026-02-14 03:25:29.848781 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:29.848811 | orchestrator | 2026-02-14 03:25:29.848822 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-02-14 03:25:29.848833 | orchestrator | Saturday 14 February 2026 03:25:26 +0000 (0:00:00.135) 0:00:35.833 ***** 2026-02-14 03:25:29.848844 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7b577363-2bac-543e-944e-5354861b1af5'}}) 2026-02-14 03:25:29.848855 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'df737486-1b51-5b4a-92b8-76d7a8957091'}}) 2026-02-14 03:25:29.848866 | orchestrator | 2026-02-14 03:25:29.848877 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-02-14 03:25:29.848888 | orchestrator | Saturday 14 February 2026 03:25:26 +0000 (0:00:00.195) 0:00:36.029 ***** 2026-02-14 03:25:29.848899 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7b577363-2bac-543e-944e-5354861b1af5', 'data_vg': 'ceph-7b577363-2bac-543e-944e-5354861b1af5'}) 2026-02-14 03:25:29.848911 | orchestrator | changed: [testbed-node-4] => 
(item={'data': 'osd-block-df737486-1b51-5b4a-92b8-76d7a8957091', 'data_vg': 'ceph-df737486-1b51-5b4a-92b8-76d7a8957091'}) 2026-02-14 03:25:29.848922 | orchestrator | 2026-02-14 03:25:29.848934 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-02-14 03:25:29.848944 | orchestrator | Saturday 14 February 2026 03:25:28 +0000 (0:00:01.892) 0:00:37.922 ***** 2026-02-14 03:25:29.848955 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b577363-2bac-543e-944e-5354861b1af5', 'data_vg': 'ceph-7b577363-2bac-543e-944e-5354861b1af5'})  2026-02-14 03:25:29.848967 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df737486-1b51-5b4a-92b8-76d7a8957091', 'data_vg': 'ceph-df737486-1b51-5b4a-92b8-76d7a8957091'})  2026-02-14 03:25:29.848978 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:29.848990 | orchestrator | 2026-02-14 03:25:29.849000 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-02-14 03:25:29.849012 | orchestrator | Saturday 14 February 2026 03:25:28 +0000 (0:00:00.218) 0:00:38.141 ***** 2026-02-14 03:25:29.849023 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7b577363-2bac-543e-944e-5354861b1af5', 'data_vg': 'ceph-7b577363-2bac-543e-944e-5354861b1af5'}) 2026-02-14 03:25:29.849048 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-df737486-1b51-5b4a-92b8-76d7a8957091', 'data_vg': 'ceph-df737486-1b51-5b4a-92b8-76d7a8957091'}) 2026-02-14 03:25:35.897350 | orchestrator | 2026-02-14 03:25:35.897463 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-02-14 03:25:35.897481 | orchestrator | Saturday 14 February 2026 03:25:29 +0000 (0:00:01.409) 0:00:39.550 ***** 2026-02-14 03:25:35.897494 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b577363-2bac-543e-944e-5354861b1af5', 'data_vg': 
'ceph-7b577363-2bac-543e-944e-5354861b1af5'})  2026-02-14 03:25:35.897507 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df737486-1b51-5b4a-92b8-76d7a8957091', 'data_vg': 'ceph-df737486-1b51-5b4a-92b8-76d7a8957091'})  2026-02-14 03:25:35.897518 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:35.897530 | orchestrator | 2026-02-14 03:25:35.897558 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-02-14 03:25:35.897569 | orchestrator | Saturday 14 February 2026 03:25:30 +0000 (0:00:00.384) 0:00:39.935 ***** 2026-02-14 03:25:35.897580 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:35.897591 | orchestrator | 2026-02-14 03:25:35.897602 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-02-14 03:25:35.897613 | orchestrator | Saturday 14 February 2026 03:25:30 +0000 (0:00:00.150) 0:00:40.085 ***** 2026-02-14 03:25:35.897624 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b577363-2bac-543e-944e-5354861b1af5', 'data_vg': 'ceph-7b577363-2bac-543e-944e-5354861b1af5'})  2026-02-14 03:25:35.897635 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df737486-1b51-5b4a-92b8-76d7a8957091', 'data_vg': 'ceph-df737486-1b51-5b4a-92b8-76d7a8957091'})  2026-02-14 03:25:35.897647 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:35.897658 | orchestrator | 2026-02-14 03:25:35.897668 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-02-14 03:25:35.897679 | orchestrator | Saturday 14 February 2026 03:25:30 +0000 (0:00:00.165) 0:00:40.251 ***** 2026-02-14 03:25:35.897690 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:35.897701 | orchestrator | 2026-02-14 03:25:35.897712 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-02-14 03:25:35.897723 | orchestrator | 
Saturday 14 February 2026 03:25:30 +0000 (0:00:00.152) 0:00:40.404 ***** 2026-02-14 03:25:35.897734 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b577363-2bac-543e-944e-5354861b1af5', 'data_vg': 'ceph-7b577363-2bac-543e-944e-5354861b1af5'})  2026-02-14 03:25:35.897745 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df737486-1b51-5b4a-92b8-76d7a8957091', 'data_vg': 'ceph-df737486-1b51-5b4a-92b8-76d7a8957091'})  2026-02-14 03:25:35.897756 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:35.897831 | orchestrator | 2026-02-14 03:25:35.897845 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-02-14 03:25:35.897856 | orchestrator | Saturday 14 February 2026 03:25:30 +0000 (0:00:00.164) 0:00:40.568 ***** 2026-02-14 03:25:35.897873 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:35.897892 | orchestrator | 2026-02-14 03:25:35.897911 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-02-14 03:25:35.897929 | orchestrator | Saturday 14 February 2026 03:25:31 +0000 (0:00:00.175) 0:00:40.744 ***** 2026-02-14 03:25:35.897949 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b577363-2bac-543e-944e-5354861b1af5', 'data_vg': 'ceph-7b577363-2bac-543e-944e-5354861b1af5'})  2026-02-14 03:25:35.897966 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df737486-1b51-5b4a-92b8-76d7a8957091', 'data_vg': 'ceph-df737486-1b51-5b4a-92b8-76d7a8957091'})  2026-02-14 03:25:35.897985 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:35.898001 | orchestrator | 2026-02-14 03:25:35.898092 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-02-14 03:25:35.898144 | orchestrator | Saturday 14 February 2026 03:25:31 +0000 (0:00:00.174) 0:00:40.918 ***** 2026-02-14 03:25:35.898164 | orchestrator | ok: [testbed-node-4] 
2026-02-14 03:25:35.898183 | orchestrator | 2026-02-14 03:25:35.898202 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-02-14 03:25:35.898220 | orchestrator | Saturday 14 February 2026 03:25:31 +0000 (0:00:00.150) 0:00:41.069 ***** 2026-02-14 03:25:35.898240 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b577363-2bac-543e-944e-5354861b1af5', 'data_vg': 'ceph-7b577363-2bac-543e-944e-5354861b1af5'})  2026-02-14 03:25:35.898252 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df737486-1b51-5b4a-92b8-76d7a8957091', 'data_vg': 'ceph-df737486-1b51-5b4a-92b8-76d7a8957091'})  2026-02-14 03:25:35.898263 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:35.898273 | orchestrator | 2026-02-14 03:25:35.898284 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-02-14 03:25:35.898295 | orchestrator | Saturday 14 February 2026 03:25:31 +0000 (0:00:00.181) 0:00:41.251 ***** 2026-02-14 03:25:35.898305 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b577363-2bac-543e-944e-5354861b1af5', 'data_vg': 'ceph-7b577363-2bac-543e-944e-5354861b1af5'})  2026-02-14 03:25:35.898316 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df737486-1b51-5b4a-92b8-76d7a8957091', 'data_vg': 'ceph-df737486-1b51-5b4a-92b8-76d7a8957091'})  2026-02-14 03:25:35.898327 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:35.898337 | orchestrator | 2026-02-14 03:25:35.898348 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-14 03:25:35.898382 | orchestrator | Saturday 14 February 2026 03:25:31 +0000 (0:00:00.164) 0:00:41.415 ***** 2026-02-14 03:25:35.898394 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b577363-2bac-543e-944e-5354861b1af5', 'data_vg': 'ceph-7b577363-2bac-543e-944e-5354861b1af5'})  2026-02-14 
03:25:35.898405 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df737486-1b51-5b4a-92b8-76d7a8957091', 'data_vg': 'ceph-df737486-1b51-5b4a-92b8-76d7a8957091'})  2026-02-14 03:25:35.898416 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:35.898426 | orchestrator | 2026-02-14 03:25:35.898437 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-14 03:25:35.898448 | orchestrator | Saturday 14 February 2026 03:25:31 +0000 (0:00:00.176) 0:00:41.592 ***** 2026-02-14 03:25:35.898467 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:35.898478 | orchestrator | 2026-02-14 03:25:35.898489 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-14 03:25:35.898500 | orchestrator | Saturday 14 February 2026 03:25:32 +0000 (0:00:00.356) 0:00:41.949 ***** 2026-02-14 03:25:35.898511 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:35.898521 | orchestrator | 2026-02-14 03:25:35.898532 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-02-14 03:25:35.898543 | orchestrator | Saturday 14 February 2026 03:25:32 +0000 (0:00:00.159) 0:00:42.109 ***** 2026-02-14 03:25:35.898553 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:35.898564 | orchestrator | 2026-02-14 03:25:35.898575 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-02-14 03:25:35.898586 | orchestrator | Saturday 14 February 2026 03:25:32 +0000 (0:00:00.163) 0:00:42.272 ***** 2026-02-14 03:25:35.898596 | orchestrator | ok: [testbed-node-4] => { 2026-02-14 03:25:35.898607 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-02-14 03:25:35.898618 | orchestrator | } 2026-02-14 03:25:35.898629 | orchestrator | 2026-02-14 03:25:35.898640 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-02-14 
03:25:35.898652 | orchestrator | Saturday 14 February 2026 03:25:32 +0000 (0:00:00.151) 0:00:42.423 ***** 2026-02-14 03:25:35.898662 | orchestrator | ok: [testbed-node-4] => { 2026-02-14 03:25:35.898673 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-02-14 03:25:35.898696 | orchestrator | } 2026-02-14 03:25:35.898707 | orchestrator | 2026-02-14 03:25:35.898718 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-02-14 03:25:35.898728 | orchestrator | Saturday 14 February 2026 03:25:32 +0000 (0:00:00.149) 0:00:42.573 ***** 2026-02-14 03:25:35.898739 | orchestrator | ok: [testbed-node-4] => { 2026-02-14 03:25:35.898750 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-02-14 03:25:35.898762 | orchestrator | } 2026-02-14 03:25:35.898796 | orchestrator | 2026-02-14 03:25:35.898808 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-02-14 03:25:35.898818 | orchestrator | Saturday 14 February 2026 03:25:32 +0000 (0:00:00.137) 0:00:42.711 ***** 2026-02-14 03:25:35.898829 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:25:35.898840 | orchestrator | 2026-02-14 03:25:35.898850 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-02-14 03:25:35.898861 | orchestrator | Saturday 14 February 2026 03:25:33 +0000 (0:00:00.522) 0:00:43.234 ***** 2026-02-14 03:25:35.898872 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:25:35.898883 | orchestrator | 2026-02-14 03:25:35.898893 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-02-14 03:25:35.898904 | orchestrator | Saturday 14 February 2026 03:25:34 +0000 (0:00:00.531) 0:00:43.765 ***** 2026-02-14 03:25:35.898915 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:25:35.898925 | orchestrator | 2026-02-14 03:25:35.898936 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-02-14 03:25:35.898947 | orchestrator | Saturday 14 February 2026 03:25:34 +0000 (0:00:00.506) 0:00:44.272 ***** 2026-02-14 03:25:35.898958 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:25:35.898969 | orchestrator | 2026-02-14 03:25:35.898979 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-02-14 03:25:35.898990 | orchestrator | Saturday 14 February 2026 03:25:34 +0000 (0:00:00.166) 0:00:44.438 ***** 2026-02-14 03:25:35.899001 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:35.899012 | orchestrator | 2026-02-14 03:25:35.899023 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-02-14 03:25:35.899033 | orchestrator | Saturday 14 February 2026 03:25:34 +0000 (0:00:00.124) 0:00:44.563 ***** 2026-02-14 03:25:35.899044 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:35.899055 | orchestrator | 2026-02-14 03:25:35.899066 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-02-14 03:25:35.899077 | orchestrator | Saturday 14 February 2026 03:25:35 +0000 (0:00:00.328) 0:00:44.892 ***** 2026-02-14 03:25:35.899087 | orchestrator | ok: [testbed-node-4] => { 2026-02-14 03:25:35.899098 | orchestrator |  "vgs_report": { 2026-02-14 03:25:35.899110 | orchestrator |  "vg": [] 2026-02-14 03:25:35.899121 | orchestrator |  } 2026-02-14 03:25:35.899132 | orchestrator | } 2026-02-14 03:25:35.899143 | orchestrator | 2026-02-14 03:25:35.899154 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-02-14 03:25:35.899165 | orchestrator | Saturday 14 February 2026 03:25:35 +0000 (0:00:00.157) 0:00:45.049 ***** 2026-02-14 03:25:35.899175 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:35.899186 | orchestrator | 2026-02-14 03:25:35.899197 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-02-14 03:25:35.899207 | orchestrator | Saturday 14 February 2026 03:25:35 +0000 (0:00:00.146) 0:00:45.195 ***** 2026-02-14 03:25:35.899218 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:35.899229 | orchestrator | 2026-02-14 03:25:35.899239 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-02-14 03:25:35.899250 | orchestrator | Saturday 14 February 2026 03:25:35 +0000 (0:00:00.143) 0:00:45.339 ***** 2026-02-14 03:25:35.899261 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:35.899271 | orchestrator | 2026-02-14 03:25:35.899282 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-02-14 03:25:35.899293 | orchestrator | Saturday 14 February 2026 03:25:35 +0000 (0:00:00.136) 0:00:45.475 ***** 2026-02-14 03:25:35.899312 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:35.899323 | orchestrator | 2026-02-14 03:25:35.899340 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-14 03:25:40.816186 | orchestrator | Saturday 14 February 2026 03:25:35 +0000 (0:00:00.135) 0:00:45.611 ***** 2026-02-14 03:25:40.816289 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:40.816301 | orchestrator | 2026-02-14 03:25:40.816311 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-14 03:25:40.816321 | orchestrator | Saturday 14 February 2026 03:25:36 +0000 (0:00:00.142) 0:00:45.753 ***** 2026-02-14 03:25:40.816330 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:40.816339 | orchestrator | 2026-02-14 03:25:40.816348 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-14 03:25:40.816357 | orchestrator | Saturday 14 February 2026 03:25:36 +0000 (0:00:00.157) 0:00:45.911 ***** 2026-02-14 03:25:40.816365 | orchestrator | skipping: [testbed-node-4] 
2026-02-14 03:25:40.816374 | orchestrator | 2026-02-14 03:25:40.816397 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-14 03:25:40.816406 | orchestrator | Saturday 14 February 2026 03:25:36 +0000 (0:00:00.153) 0:00:46.064 ***** 2026-02-14 03:25:40.816415 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:40.816424 | orchestrator | 2026-02-14 03:25:40.816433 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-14 03:25:40.816442 | orchestrator | Saturday 14 February 2026 03:25:36 +0000 (0:00:00.151) 0:00:46.216 ***** 2026-02-14 03:25:40.816451 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:40.816459 | orchestrator | 2026-02-14 03:25:40.816468 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-02-14 03:25:40.816477 | orchestrator | Saturday 14 February 2026 03:25:36 +0000 (0:00:00.140) 0:00:46.357 ***** 2026-02-14 03:25:40.816486 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:40.816495 | orchestrator | 2026-02-14 03:25:40.816503 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-14 03:25:40.816513 | orchestrator | Saturday 14 February 2026 03:25:36 +0000 (0:00:00.356) 0:00:46.713 ***** 2026-02-14 03:25:40.816522 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:40.816531 | orchestrator | 2026-02-14 03:25:40.816540 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-14 03:25:40.816548 | orchestrator | Saturday 14 February 2026 03:25:37 +0000 (0:00:00.151) 0:00:46.865 ***** 2026-02-14 03:25:40.816557 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:40.816566 | orchestrator | 2026-02-14 03:25:40.816575 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-14 03:25:40.816584 | orchestrator | 
Saturday 14 February 2026 03:25:37 +0000 (0:00:00.129) 0:00:46.995 ***** 2026-02-14 03:25:40.816592 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:40.816601 | orchestrator | 2026-02-14 03:25:40.816610 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-14 03:25:40.816619 | orchestrator | Saturday 14 February 2026 03:25:37 +0000 (0:00:00.143) 0:00:47.138 ***** 2026-02-14 03:25:40.816628 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:40.816636 | orchestrator | 2026-02-14 03:25:40.816645 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-14 03:25:40.816654 | orchestrator | Saturday 14 February 2026 03:25:37 +0000 (0:00:00.135) 0:00:47.274 ***** 2026-02-14 03:25:40.816664 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b577363-2bac-543e-944e-5354861b1af5', 'data_vg': 'ceph-7b577363-2bac-543e-944e-5354861b1af5'})  2026-02-14 03:25:40.816674 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df737486-1b51-5b4a-92b8-76d7a8957091', 'data_vg': 'ceph-df737486-1b51-5b4a-92b8-76d7a8957091'})  2026-02-14 03:25:40.816683 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:40.816692 | orchestrator | 2026-02-14 03:25:40.816701 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-14 03:25:40.816728 | orchestrator | Saturday 14 February 2026 03:25:37 +0000 (0:00:00.169) 0:00:47.444 ***** 2026-02-14 03:25:40.816738 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b577363-2bac-543e-944e-5354861b1af5', 'data_vg': 'ceph-7b577363-2bac-543e-944e-5354861b1af5'})  2026-02-14 03:25:40.816778 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df737486-1b51-5b4a-92b8-76d7a8957091', 'data_vg': 'ceph-df737486-1b51-5b4a-92b8-76d7a8957091'})  2026-02-14 03:25:40.816789 | orchestrator | skipping: 
[testbed-node-4] 2026-02-14 03:25:40.816800 | orchestrator | 2026-02-14 03:25:40.816809 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-14 03:25:40.816819 | orchestrator | Saturday 14 February 2026 03:25:37 +0000 (0:00:00.169) 0:00:47.613 ***** 2026-02-14 03:25:40.816829 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b577363-2bac-543e-944e-5354861b1af5', 'data_vg': 'ceph-7b577363-2bac-543e-944e-5354861b1af5'})  2026-02-14 03:25:40.816839 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df737486-1b51-5b4a-92b8-76d7a8957091', 'data_vg': 'ceph-df737486-1b51-5b4a-92b8-76d7a8957091'})  2026-02-14 03:25:40.816849 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:40.816860 | orchestrator | 2026-02-14 03:25:40.816870 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-02-14 03:25:40.816880 | orchestrator | Saturday 14 February 2026 03:25:38 +0000 (0:00:00.160) 0:00:47.773 ***** 2026-02-14 03:25:40.816890 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b577363-2bac-543e-944e-5354861b1af5', 'data_vg': 'ceph-7b577363-2bac-543e-944e-5354861b1af5'})  2026-02-14 03:25:40.816900 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df737486-1b51-5b4a-92b8-76d7a8957091', 'data_vg': 'ceph-df737486-1b51-5b4a-92b8-76d7a8957091'})  2026-02-14 03:25:40.816910 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:25:40.816920 | orchestrator | 2026-02-14 03:25:40.816945 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-14 03:25:40.816955 | orchestrator | Saturday 14 February 2026 03:25:38 +0000 (0:00:00.152) 0:00:47.925 ***** 2026-02-14 03:25:40.816965 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b577363-2bac-543e-944e-5354861b1af5', 'data_vg': 
'ceph-7b577363-2bac-543e-944e-5354861b1af5'})
2026-02-14 03:25:40.816976 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df737486-1b51-5b4a-92b8-76d7a8957091', 'data_vg': 'ceph-df737486-1b51-5b4a-92b8-76d7a8957091'})
2026-02-14 03:25:40.816986 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:25:40.816995 | orchestrator |
2026-02-14 03:25:40.817010 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-02-14 03:25:40.817020 | orchestrator | Saturday 14 February 2026 03:25:38 +0000 (0:00:00.161) 0:00:48.087 *****
2026-02-14 03:25:40.817029 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b577363-2bac-543e-944e-5354861b1af5', 'data_vg': 'ceph-7b577363-2bac-543e-944e-5354861b1af5'})
2026-02-14 03:25:40.817039 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df737486-1b51-5b4a-92b8-76d7a8957091', 'data_vg': 'ceph-df737486-1b51-5b4a-92b8-76d7a8957091'})
2026-02-14 03:25:40.817050 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:25:40.817059 | orchestrator |
2026-02-14 03:25:40.817069 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-02-14 03:25:40.817079 | orchestrator | Saturday 14 February 2026 03:25:38 +0000 (0:00:00.175) 0:00:48.262 *****
2026-02-14 03:25:40.817090 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b577363-2bac-543e-944e-5354861b1af5', 'data_vg': 'ceph-7b577363-2bac-543e-944e-5354861b1af5'})
2026-02-14 03:25:40.817100 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df737486-1b51-5b4a-92b8-76d7a8957091', 'data_vg': 'ceph-df737486-1b51-5b4a-92b8-76d7a8957091'})
2026-02-14 03:25:40.817110 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:25:40.817126 | orchestrator |
2026-02-14 03:25:40.817135 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-02-14 03:25:40.817143 | orchestrator | Saturday 14 February 2026 03:25:38 +0000 (0:00:00.368) 0:00:48.631 *****
2026-02-14 03:25:40.817152 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b577363-2bac-543e-944e-5354861b1af5', 'data_vg': 'ceph-7b577363-2bac-543e-944e-5354861b1af5'})
2026-02-14 03:25:40.817161 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df737486-1b51-5b4a-92b8-76d7a8957091', 'data_vg': 'ceph-df737486-1b51-5b4a-92b8-76d7a8957091'})
2026-02-14 03:25:40.817170 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:25:40.817178 | orchestrator |
2026-02-14 03:25:40.817187 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-02-14 03:25:40.817196 | orchestrator | Saturday 14 February 2026 03:25:39 +0000 (0:00:00.171) 0:00:48.803 *****
2026-02-14 03:25:40.817205 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:25:40.817213 | orchestrator |
2026-02-14 03:25:40.817222 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-02-14 03:25:40.817231 | orchestrator | Saturday 14 February 2026 03:25:39 +0000 (0:00:00.519) 0:00:49.322 *****
2026-02-14 03:25:40.817240 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:25:40.817248 | orchestrator |
2026-02-14 03:25:40.817257 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-02-14 03:25:40.817266 | orchestrator | Saturday 14 February 2026 03:25:40 +0000 (0:00:00.519) 0:00:49.842 *****
2026-02-14 03:25:40.817274 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:25:40.817283 | orchestrator |
2026-02-14 03:25:40.817292 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-02-14 03:25:40.817301 | orchestrator | Saturday 14 February 2026 03:25:40 +0000 (0:00:00.149) 0:00:49.991 *****
2026-02-14 03:25:40.817309 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-7b577363-2bac-543e-944e-5354861b1af5', 'vg_name': 'ceph-7b577363-2bac-543e-944e-5354861b1af5'})
2026-02-14 03:25:40.817319 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-df737486-1b51-5b4a-92b8-76d7a8957091', 'vg_name': 'ceph-df737486-1b51-5b4a-92b8-76d7a8957091'})
2026-02-14 03:25:40.817328 | orchestrator |
2026-02-14 03:25:40.817337 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-02-14 03:25:40.817345 | orchestrator | Saturday 14 February 2026 03:25:40 +0000 (0:00:00.192) 0:00:50.183 *****
2026-02-14 03:25:40.817354 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b577363-2bac-543e-944e-5354861b1af5', 'data_vg': 'ceph-7b577363-2bac-543e-944e-5354861b1af5'})
2026-02-14 03:25:40.817363 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df737486-1b51-5b4a-92b8-76d7a8957091', 'data_vg': 'ceph-df737486-1b51-5b4a-92b8-76d7a8957091'})
2026-02-14 03:25:40.817372 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:25:40.817380 | orchestrator |
2026-02-14 03:25:40.817389 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-02-14 03:25:40.817398 | orchestrator | Saturday 14 February 2026 03:25:40 +0000 (0:00:00.167) 0:00:50.350 *****
2026-02-14 03:25:40.817407 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b577363-2bac-543e-944e-5354861b1af5', 'data_vg': 'ceph-7b577363-2bac-543e-944e-5354861b1af5'})
2026-02-14 03:25:40.817421 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df737486-1b51-5b4a-92b8-76d7a8957091', 'data_vg': 'ceph-df737486-1b51-5b4a-92b8-76d7a8957091'})
2026-02-14 03:25:47.470691 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:25:47.470869 | orchestrator |
2026-02-14 03:25:47.470888 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-02-14 03:25:47.470901 | orchestrator | Saturday 14 February 2026 03:25:40 +0000 (0:00:00.175) 0:00:50.526 *****
2026-02-14 03:25:47.470913 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b577363-2bac-543e-944e-5354861b1af5', 'data_vg': 'ceph-7b577363-2bac-543e-944e-5354861b1af5'})
2026-02-14 03:25:47.470962 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df737486-1b51-5b4a-92b8-76d7a8957091', 'data_vg': 'ceph-df737486-1b51-5b4a-92b8-76d7a8957091'})
2026-02-14 03:25:47.470975 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:25:47.470986 | orchestrator |
2026-02-14 03:25:47.470997 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-02-14 03:25:47.471008 | orchestrator | Saturday 14 February 2026 03:25:40 +0000 (0:00:00.172) 0:00:50.698 *****
2026-02-14 03:25:47.471019 | orchestrator | ok: [testbed-node-4] => {
2026-02-14 03:25:47.471030 | orchestrator |     "lvm_report": {
2026-02-14 03:25:47.471043 | orchestrator |         "lv": [
2026-02-14 03:25:47.471054 | orchestrator |             {
2026-02-14 03:25:47.471066 | orchestrator |                 "lv_name": "osd-block-7b577363-2bac-543e-944e-5354861b1af5",
2026-02-14 03:25:47.471077 | orchestrator |                 "vg_name": "ceph-7b577363-2bac-543e-944e-5354861b1af5"
2026-02-14 03:25:47.471088 | orchestrator |             },
2026-02-14 03:25:47.471099 | orchestrator |             {
2026-02-14 03:25:47.471110 | orchestrator |                 "lv_name": "osd-block-df737486-1b51-5b4a-92b8-76d7a8957091",
2026-02-14 03:25:47.471121 | orchestrator |                 "vg_name": "ceph-df737486-1b51-5b4a-92b8-76d7a8957091"
2026-02-14 03:25:47.471131 | orchestrator |             }
2026-02-14 03:25:47.471142 | orchestrator |         ],
2026-02-14 03:25:47.471153 | orchestrator |         "pv": [
2026-02-14 03:25:47.471169 | orchestrator |             {
2026-02-14 03:25:47.471187 | orchestrator |                 "pv_name": "/dev/sdb",
2026-02-14 03:25:47.471206 | orchestrator |                 "vg_name": "ceph-7b577363-2bac-543e-944e-5354861b1af5"
2026-02-14 03:25:47.471225 | orchestrator |             },
2026-02-14 03:25:47.471244 | orchestrator |             {
2026-02-14 03:25:47.471263 | orchestrator |                 "pv_name": "/dev/sdc",
2026-02-14 03:25:47.471281 | orchestrator |                 "vg_name": "ceph-df737486-1b51-5b4a-92b8-76d7a8957091"
2026-02-14 03:25:47.471300 | orchestrator |             }
2026-02-14 03:25:47.471317 | orchestrator |         ]
2026-02-14 03:25:47.471334 | orchestrator |     }
2026-02-14 03:25:47.471352 | orchestrator | }
2026-02-14 03:25:47.471370 | orchestrator |
2026-02-14 03:25:47.471388 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-02-14 03:25:47.471406 | orchestrator |
2026-02-14 03:25:47.471423 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-14 03:25:47.471441 | orchestrator | Saturday 14 February 2026 03:25:41 +0000 (0:00:00.299) 0:00:50.998 *****
2026-02-14 03:25:47.471458 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-02-14 03:25:47.471475 | orchestrator |
2026-02-14 03:25:47.471494 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-14 03:25:47.471511 | orchestrator | Saturday 14 February 2026 03:25:41 +0000 (0:00:00.696) 0:00:51.695 *****
2026-02-14 03:25:47.471528 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:25:47.471546 | orchestrator |
2026-02-14 03:25:47.471566 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:25:47.471585 | orchestrator | Saturday 14 February 2026 03:25:42 +0000 (0:00:00.252) 0:00:51.948 *****
2026-02-14 03:25:47.471604 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-02-14 03:25:47.471620 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-02-14 03:25:47.471638 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-02-14 03:25:47.471656 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-02-14 03:25:47.471674 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-02-14 03:25:47.471693 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-02-14 03:25:47.471739 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-02-14 03:25:47.471778 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-02-14 03:25:47.471796 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-02-14 03:25:47.471816 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-02-14 03:25:47.471827 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-02-14 03:25:47.471838 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-02-14 03:25:47.471854 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-02-14 03:25:47.471872 | orchestrator |
2026-02-14 03:25:47.471890 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:25:47.471907 | orchestrator | Saturday 14 February 2026 03:25:42 +0000 (0:00:00.455) 0:00:52.403 *****
2026-02-14 03:25:47.471926 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:25:47.471945 | orchestrator |
2026-02-14 03:25:47.471964 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:25:47.471983 | orchestrator | Saturday 14 February 2026 03:25:42 +0000 (0:00:00.221) 0:00:52.624 *****
2026-02-14 03:25:47.472001 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:25:47.472017 | orchestrator |
2026-02-14 03:25:47.472028 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:25:47.472060 | orchestrator | Saturday 14 February 2026 03:25:43 +0000 (0:00:00.212) 0:00:52.837 *****
2026-02-14 03:25:47.472072 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:25:47.472083 | orchestrator |
2026-02-14 03:25:47.472094 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:25:47.472105 | orchestrator | Saturday 14 February 2026 03:25:43 +0000 (0:00:00.200) 0:00:53.037 *****
2026-02-14 03:25:47.472115 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:25:47.472126 | orchestrator |
2026-02-14 03:25:47.472137 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:25:47.472148 | orchestrator | Saturday 14 February 2026 03:25:43 +0000 (0:00:00.239) 0:00:53.277 *****
2026-02-14 03:25:47.472160 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:25:47.472170 | orchestrator |
2026-02-14 03:25:47.472181 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:25:47.472192 | orchestrator | Saturday 14 February 2026 03:25:43 +0000 (0:00:00.206) 0:00:53.483 *****
2026-02-14 03:25:47.472203 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:25:47.472214 | orchestrator |
2026-02-14 03:25:47.472224 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:25:47.472235 | orchestrator | Saturday 14 February 2026 03:25:43 +0000 (0:00:00.228) 0:00:53.712 *****
2026-02-14 03:25:47.472246 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:25:47.472257 | orchestrator |
2026-02-14 03:25:47.472268 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:25:47.472279 | orchestrator | Saturday 14 February 2026 03:25:44 +0000 (0:00:00.212) 0:00:53.924 *****
2026-02-14 03:25:47.472289 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:25:47.472300 | orchestrator |
2026-02-14 03:25:47.472311 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:25:47.472322 | orchestrator | Saturday 14 February 2026 03:25:44 +0000 (0:00:00.645) 0:00:54.569 *****
2026-02-14 03:25:47.472333 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397)
2026-02-14 03:25:47.472345 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397)
2026-02-14 03:25:47.472356 | orchestrator |
2026-02-14 03:25:47.472366 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:25:47.472377 | orchestrator | Saturday 14 February 2026 03:25:45 +0000 (0:00:00.468) 0:00:55.038 *****
2026-02-14 03:25:47.472491 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_89ffb490-ef56-465e-9c2a-8772cc279d48)
2026-02-14 03:25:47.472541 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_89ffb490-ef56-465e-9c2a-8772cc279d48)
2026-02-14 03:25:47.472557 | orchestrator |
2026-02-14 03:25:47.472575 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:25:47.472592 | orchestrator | Saturday 14 February 2026 03:25:45 +0000 (0:00:00.434) 0:00:55.472 *****
2026-02-14 03:25:47.472609 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_54e6ca54-a1fe-4396-8891-5cf52f763d40)
2026-02-14 03:25:47.472628 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_54e6ca54-a1fe-4396-8891-5cf52f763d40)
2026-02-14 03:25:47.472645 | orchestrator |
2026-02-14 03:25:47.472664 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:25:47.472682 | orchestrator | Saturday 14 February 2026 03:25:46 +0000 (0:00:00.453) 0:00:55.926 *****
2026-02-14 03:25:47.472700 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_43152e32-b25a-4e6e-b6b7-c3272099ce67)
2026-02-14 03:25:47.472746 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_43152e32-b25a-4e6e-b6b7-c3272099ce67)
2026-02-14 03:25:47.472766 | orchestrator |
2026-02-14 03:25:47.472785 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-14 03:25:47.472803 | orchestrator | Saturday 14 February 2026 03:25:46 +0000 (0:00:00.434) 0:00:56.360 *****
2026-02-14 03:25:47.472823 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-14 03:25:47.472842 | orchestrator |
2026-02-14 03:25:47.472860 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:25:47.472876 | orchestrator | Saturday 14 February 2026 03:25:47 +0000 (0:00:00.382) 0:00:56.742 *****
2026-02-14 03:25:47.472887 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-02-14 03:25:47.472898 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-02-14 03:25:47.472908 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-02-14 03:25:47.472919 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-02-14 03:25:47.472929 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-02-14 03:25:47.472940 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-02-14 03:25:47.472950 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-02-14 03:25:47.472961 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-02-14 03:25:47.472972 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-02-14 03:25:47.472982 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-02-14 03:25:47.472993 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-02-14 03:25:47.473019 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-02-14 03:25:56.476894 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-02-14 03:25:56.477011 | orchestrator |
2026-02-14 03:25:56.477030 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:25:56.477043 | orchestrator | Saturday 14 February 2026 03:25:47 +0000 (0:00:00.436) 0:00:57.179 *****
2026-02-14 03:25:56.477055 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:25:56.477067 | orchestrator |
2026-02-14 03:25:56.477078 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:25:56.477107 | orchestrator | Saturday 14 February 2026 03:25:47 +0000 (0:00:00.197) 0:00:57.376 *****
2026-02-14 03:25:56.477118 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:25:56.477150 | orchestrator |
2026-02-14 03:25:56.477162 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:25:56.477172 | orchestrator | Saturday 14 February 2026 03:25:47 +0000 (0:00:00.217) 0:00:57.594 *****
2026-02-14 03:25:56.477183 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:25:56.477194 | orchestrator |
2026-02-14 03:25:56.477206 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:25:56.477216 | orchestrator | Saturday 14 February 2026 03:25:48 +0000 (0:00:00.213) 0:00:57.807 *****
2026-02-14 03:25:56.477227 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:25:56.477238 | orchestrator |
2026-02-14 03:25:56.477249 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:25:56.477259 | orchestrator | Saturday 14 February 2026 03:25:48 +0000 (0:00:00.217) 0:00:58.024 *****
2026-02-14 03:25:56.477270 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:25:56.477281 | orchestrator |
2026-02-14 03:25:56.477292 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:25:56.477302 | orchestrator | Saturday 14 February 2026 03:25:48 +0000 (0:00:00.661) 0:00:58.686 *****
2026-02-14 03:25:56.477313 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:25:56.477324 | orchestrator |
2026-02-14 03:25:56.477334 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:25:56.477345 | orchestrator | Saturday 14 February 2026 03:25:49 +0000 (0:00:00.216) 0:00:58.903 *****
2026-02-14 03:25:56.477356 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:25:56.477367 | orchestrator |
2026-02-14 03:25:56.477377 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:25:56.477389 | orchestrator | Saturday 14 February 2026 03:25:49 +0000 (0:00:00.210) 0:00:59.113 *****
2026-02-14 03:25:56.477399 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:25:56.477410 | orchestrator |
2026-02-14 03:25:56.477422 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:25:56.477435 | orchestrator | Saturday 14 February 2026 03:25:49 +0000 (0:00:00.208) 0:00:59.322 *****
2026-02-14 03:25:56.477448 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-02-14 03:25:56.477461 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-02-14 03:25:56.477473 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-02-14 03:25:56.477486 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-02-14 03:25:56.477498 | orchestrator |
2026-02-14 03:25:56.477510 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:25:56.477539 | orchestrator | Saturday 14 February 2026 03:25:50 +0000 (0:00:00.658) 0:00:59.981 *****
2026-02-14 03:25:56.477563 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:25:56.477576 | orchestrator |
2026-02-14 03:25:56.477588 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:25:56.477600 | orchestrator | Saturday 14 February 2026 03:25:50 +0000 (0:00:00.212) 0:01:00.193 *****
2026-02-14 03:25:56.477612 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:25:56.477624 | orchestrator |
2026-02-14 03:25:56.477637 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:25:56.477650 | orchestrator | Saturday 14 February 2026 03:25:50 +0000 (0:00:00.206) 0:01:00.399 *****
2026-02-14 03:25:56.477663 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:25:56.477699 | orchestrator |
2026-02-14 03:25:56.477713 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-14 03:25:56.477725 | orchestrator | Saturday 14 February 2026 03:25:50 +0000 (0:00:00.211) 0:01:00.611 *****
2026-02-14 03:25:56.477738 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:25:56.477750 | orchestrator |
2026-02-14 03:25:56.477763 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-02-14 03:25:56.477776 | orchestrator | Saturday 14 February 2026 03:25:51 +0000 (0:00:00.204) 0:01:00.815 *****
2026-02-14 03:25:56.477787 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:25:56.477798 | orchestrator |
2026-02-14 03:25:56.477817 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-02-14 03:25:56.477828 | orchestrator | Saturday 14 February 2026 03:25:51 +0000 (0:00:00.151) 0:01:00.967 *****
2026-02-14 03:25:56.477840 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1745485d-ab31-507e-930d-8d3ce82a0691'}})
2026-02-14 03:25:56.477852 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f7da5590-35e5-5703-96c8-37fe127c27f7'}})
2026-02-14 03:25:56.477863 | orchestrator |
2026-02-14 03:25:56.477874 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-14 03:25:56.477885 | orchestrator | Saturday 14 February 2026 03:25:51 +0000 (0:00:00.184) 0:01:01.152 *****
2026-02-14 03:25:56.477897 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1745485d-ab31-507e-930d-8d3ce82a0691', 'data_vg': 'ceph-1745485d-ab31-507e-930d-8d3ce82a0691'})
2026-02-14 03:25:56.477910 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f7da5590-35e5-5703-96c8-37fe127c27f7', 'data_vg': 'ceph-f7da5590-35e5-5703-96c8-37fe127c27f7'})
2026-02-14 03:25:56.477921 | orchestrator |
2026-02-14 03:25:56.477932 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-14 03:25:56.477961 | orchestrator | Saturday 14 February 2026 03:25:53 +0000 (0:00:01.820) 0:01:02.973 *****
2026-02-14 03:25:56.477972 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1745485d-ab31-507e-930d-8d3ce82a0691', 'data_vg': 'ceph-1745485d-ab31-507e-930d-8d3ce82a0691'})
2026-02-14 03:25:56.477984 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7da5590-35e5-5703-96c8-37fe127c27f7', 'data_vg': 'ceph-f7da5590-35e5-5703-96c8-37fe127c27f7'})
2026-02-14 03:25:56.477995 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:25:56.478006 | orchestrator |
2026-02-14 03:25:56.478084 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-14 03:25:56.478101 | orchestrator | Saturday 14 February 2026 03:25:53 +0000 (0:00:00.405) 0:01:03.379 *****
2026-02-14 03:25:56.478112 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1745485d-ab31-507e-930d-8d3ce82a0691', 'data_vg': 'ceph-1745485d-ab31-507e-930d-8d3ce82a0691'})
2026-02-14 03:25:56.478123 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f7da5590-35e5-5703-96c8-37fe127c27f7', 'data_vg': 'ceph-f7da5590-35e5-5703-96c8-37fe127c27f7'})
2026-02-14 03:25:56.478134 | orchestrator |
2026-02-14 03:25:56.478145 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-14 03:25:56.478155 | orchestrator | Saturday 14 February 2026 03:25:55 +0000 (0:00:01.358) 0:01:04.737 *****
2026-02-14 03:25:56.478166 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1745485d-ab31-507e-930d-8d3ce82a0691', 'data_vg': 'ceph-1745485d-ab31-507e-930d-8d3ce82a0691'})
2026-02-14 03:25:56.478177 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7da5590-35e5-5703-96c8-37fe127c27f7', 'data_vg': 'ceph-f7da5590-35e5-5703-96c8-37fe127c27f7'})
2026-02-14 03:25:56.478188 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:25:56.478198 | orchestrator |
2026-02-14 03:25:56.478209 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-14 03:25:56.478220 | orchestrator | Saturday 14 February 2026 03:25:55 +0000 (0:00:00.168) 0:01:04.906 *****
2026-02-14 03:25:56.478231 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:25:56.478241 | orchestrator |
2026-02-14 03:25:56.478252 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-14 03:25:56.478263 | orchestrator | Saturday 14 February 2026 03:25:55 +0000 (0:00:00.156) 0:01:05.062 *****
2026-02-14 03:25:56.478273 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1745485d-ab31-507e-930d-8d3ce82a0691', 'data_vg': 'ceph-1745485d-ab31-507e-930d-8d3ce82a0691'})
2026-02-14 03:25:56.478284 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7da5590-35e5-5703-96c8-37fe127c27f7', 'data_vg': 'ceph-f7da5590-35e5-5703-96c8-37fe127c27f7'})
2026-02-14 03:25:56.478302 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:25:56.478313 | orchestrator |
2026-02-14 03:25:56.478324 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-14 03:25:56.478335 | orchestrator | Saturday 14 February 2026 03:25:55 +0000 (0:00:00.168) 0:01:05.230 *****
2026-02-14 03:25:56.478346 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:25:56.478357 | orchestrator |
2026-02-14 03:25:56.478367 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-14 03:25:56.478378 | orchestrator | Saturday 14 February 2026 03:25:55 +0000 (0:00:00.154) 0:01:05.385 *****
2026-02-14 03:25:56.478389 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1745485d-ab31-507e-930d-8d3ce82a0691', 'data_vg': 'ceph-1745485d-ab31-507e-930d-8d3ce82a0691'})
2026-02-14 03:25:56.478400 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7da5590-35e5-5703-96c8-37fe127c27f7', 'data_vg': 'ceph-f7da5590-35e5-5703-96c8-37fe127c27f7'})
2026-02-14 03:25:56.478411 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:25:56.478421 | orchestrator |
2026-02-14 03:25:56.478432 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-14 03:25:56.478443 | orchestrator | Saturday 14 February 2026 03:25:55 +0000 (0:00:00.158) 0:01:05.544 *****
2026-02-14 03:25:56.478454 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:25:56.478465 | orchestrator |
2026-02-14 03:25:56.478475 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-14 03:25:56.478486 | orchestrator | Saturday 14 February 2026 03:25:55 +0000 (0:00:00.161) 0:01:05.705 *****
2026-02-14 03:25:56.478497 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1745485d-ab31-507e-930d-8d3ce82a0691', 'data_vg': 'ceph-1745485d-ab31-507e-930d-8d3ce82a0691'})
2026-02-14 03:25:56.478508 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7da5590-35e5-5703-96c8-37fe127c27f7', 'data_vg': 'ceph-f7da5590-35e5-5703-96c8-37fe127c27f7'})
2026-02-14 03:25:56.478518 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:25:56.478529 | orchestrator |
2026-02-14 03:25:56.478540 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-02-14 03:25:56.478551 | orchestrator | Saturday 14 February 2026 03:25:56 +0000 (0:00:00.177) 0:01:05.883 *****
2026-02-14 03:25:56.478562 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:25:56.478573 | orchestrator |
2026-02-14 03:25:56.478584 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-02-14 03:25:56.478594 | orchestrator | Saturday 14 February 2026 03:25:56 +0000 (0:00:00.145) 0:01:06.028 *****
2026-02-14 03:25:56.478614 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1745485d-ab31-507e-930d-8d3ce82a0691', 'data_vg': 'ceph-1745485d-ab31-507e-930d-8d3ce82a0691'})
2026-02-14 03:26:02.991979 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7da5590-35e5-5703-96c8-37fe127c27f7', 'data_vg': 'ceph-f7da5590-35e5-5703-96c8-37fe127c27f7'})
2026-02-14 03:26:02.992089 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:26:02.992107 | orchestrator |
2026-02-14 03:26:02.992120 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-02-14 03:26:02.992132 | orchestrator | Saturday 14 February 2026 03:25:56 +0000 (0:00:00.165) 0:01:06.194 *****
2026-02-14 03:26:02.992160 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1745485d-ab31-507e-930d-8d3ce82a0691', 'data_vg': 'ceph-1745485d-ab31-507e-930d-8d3ce82a0691'})
2026-02-14 03:26:02.992173 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7da5590-35e5-5703-96c8-37fe127c27f7', 'data_vg': 'ceph-f7da5590-35e5-5703-96c8-37fe127c27f7'})
2026-02-14 03:26:02.992184 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:26:02.992196 | orchestrator |
2026-02-14 03:26:02.992207 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-02-14 03:26:02.992219 | orchestrator | Saturday 14 February 2026 03:25:56 +0000 (0:00:00.157) 0:01:06.351 *****
2026-02-14 03:26:02.992249 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1745485d-ab31-507e-930d-8d3ce82a0691', 'data_vg': 'ceph-1745485d-ab31-507e-930d-8d3ce82a0691'})
2026-02-14 03:26:02.992261 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7da5590-35e5-5703-96c8-37fe127c27f7', 'data_vg': 'ceph-f7da5590-35e5-5703-96c8-37fe127c27f7'})
2026-02-14 03:26:02.992272 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:26:02.992283 | orchestrator |
2026-02-14 03:26:02.992295 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-02-14 03:26:02.992306 | orchestrator | Saturday 14 February 2026 03:25:56 +0000 (0:00:00.368) 0:01:06.720 *****
2026-02-14 03:26:02.992317 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:26:02.992328 | orchestrator |
2026-02-14 03:26:02.992339 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-02-14 03:26:02.992350 | orchestrator | Saturday 14 February 2026 03:25:57 +0000 (0:00:00.136) 0:01:06.857 *****
2026-02-14 03:26:02.992361 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:26:02.992373 | orchestrator |
2026-02-14 03:26:02.992385 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-02-14 03:26:02.992396 | orchestrator | Saturday 14 February 2026 03:25:57 +0000 (0:00:00.147) 0:01:07.004 *****
2026-02-14 03:26:02.992407 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:26:02.992418 | orchestrator |
2026-02-14 03:26:02.992429 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-02-14 03:26:02.992440 | orchestrator | Saturday 14 February 2026 03:25:57 +0000 (0:00:00.143) 0:01:07.147 *****
2026-02-14 03:26:02.992452 | orchestrator | ok: [testbed-node-5] => {
2026-02-14 03:26:02.992463 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-02-14 03:26:02.992475 | orchestrator | }
2026-02-14 03:26:02.992486 | orchestrator |
2026-02-14 03:26:02.992497 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-02-14 03:26:02.992510 | orchestrator | Saturday 14 February 2026 03:25:57 +0000 (0:00:00.155) 0:01:07.303 *****
2026-02-14 03:26:02.992524 | orchestrator | ok: [testbed-node-5] => {
2026-02-14 03:26:02.992537 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-02-14 03:26:02.992549 | orchestrator | }
2026-02-14 03:26:02.992562 | orchestrator |
2026-02-14 03:26:02.992574 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-02-14 03:26:02.992587 | orchestrator | Saturday 14 February 2026 03:25:57 +0000 (0:00:00.138) 0:01:07.441 *****
2026-02-14 03:26:02.992600 | orchestrator | ok: [testbed-node-5] => {
2026-02-14 03:26:02.992613 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-02-14 03:26:02.992626 | orchestrator | }
2026-02-14 03:26:02.992667 | orchestrator |
2026-02-14 03:26:02.992687 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-02-14 03:26:02.992708 | orchestrator | Saturday 14 February 2026 03:25:57 +0000 (0:00:00.151) 0:01:07.593 *****
2026-02-14 03:26:02.992728 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:26:02.992747 | orchestrator |
2026-02-14 03:26:02.992767 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-02-14 03:26:02.992781 | orchestrator | Saturday 14 February 2026 03:25:58 +0000 (0:00:00.567) 0:01:08.160 *****
2026-02-14 03:26:02.992793 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:26:02.992804 | orchestrator |
2026-02-14 03:26:02.992815 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-02-14 03:26:02.992826 | orchestrator | Saturday 14 February 2026 03:25:58 +0000 (0:00:00.514) 0:01:08.675 *****
2026-02-14 03:26:02.992837 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:26:02.992847 | orchestrator |
2026-02-14 03:26:02.992858 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-02-14 03:26:02.992869 | orchestrator | Saturday 14 February 2026 03:25:59 +0000 (0:00:00.522) 0:01:09.197 *****
2026-02-14 03:26:02.992880 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:26:02.992891 | orchestrator |
2026-02-14 03:26:02.992902 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-02-14 03:26:02.992921 | orchestrator | Saturday 14 February 2026 03:25:59 +0000 (0:00:00.153) 0:01:09.351 *****
2026-02-14 03:26:02.992932 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:26:02.992943 | orchestrator |
2026-02-14 03:26:02.992954 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-02-14 03:26:02.992965 | orchestrator | Saturday 14 February 2026 03:25:59 +0000 (0:00:00.104) 0:01:09.455 *****
2026-02-14 03:26:02.992976 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:26:02.992987 | orchestrator |
2026-02-14 03:26:02.992998 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-02-14 03:26:02.993009 | orchestrator | Saturday 14 February 2026 03:26:00 +0000 (0:00:00.325) 0:01:09.780 *****
2026-02-14 03:26:02.993020 | orchestrator | ok: [testbed-node-5] => {
2026-02-14 03:26:02.993031 | orchestrator |     "vgs_report": {
2026-02-14 03:26:02.993043 | orchestrator |         "vg": []
2026-02-14 03:26:02.993073 | orchestrator |     }
2026-02-14 03:26:02.993085 | orchestrator | }
2026-02-14 03:26:02.993096 | orchestrator |
2026-02-14 03:26:02.993107 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-02-14 03:26:02.993118 | orchestrator | Saturday 14 February 2026 03:26:00 +0000 (0:00:00.159) 0:01:09.940 *****
2026-02-14 03:26:02.993129 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:26:02.993140 | orchestrator |
2026-02-14 03:26:02.993151 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-02-14 03:26:02.993162 | orchestrator | Saturday 14 February 2026 03:26:00 +0000 (0:00:00.143) 0:01:10.084 *****
2026-02-14 03:26:02.993179 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:26:02.993190 | orchestrator |
2026-02-14 03:26:02.993201 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-02-14 03:26:02.993212 | orchestrator | Saturday 14 February 2026 03:26:00 +0000 (0:00:00.152) 0:01:10.236 *****
2026-02-14 03:26:02.993223 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:26:02.993234 | orchestrator |
2026-02-14 03:26:02.993245 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-02-14 03:26:02.993256 | orchestrator | Saturday 14 February 2026 03:26:00 +0000 (0:00:00.135) 0:01:10.371 *****
2026-02-14 03:26:02.993267 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:26:02.993278 | orchestrator |
2026-02-14 03:26:02.993288 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-02-14 03:26:02.993300 | orchestrator | Saturday 14 February 2026 03:26:00 +0000 (0:00:00.156) 0:01:10.528 *****
2026-02-14 03:26:02.993310 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:26:02.993321 | orchestrator |
2026-02-14 03:26:02.993332 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-02-14 03:26:02.993343 | orchestrator | Saturday 14 February 2026 03:26:00 +0000 (0:00:00.147) 0:01:10.675 *****
2026-02-14 03:26:02.993354 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:26:02.993365 | orchestrator |
2026-02-14 03:26:02.993376 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-02-14 03:26:02.993386 | orchestrator | Saturday 14 February 2026 03:26:01 +0000 (0:00:00.146) 0:01:10.822 *****
2026-02-14 03:26:02.993397 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:26:02.993408 | orchestrator |
2026-02-14 03:26:02.993419 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-02-14 03:26:02.993430 | orchestrator | Saturday 14 February 2026 03:26:01 +0000 (0:00:00.141) 0:01:10.964 *****
2026-02-14 03:26:02.993441 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:26:02.993452 | orchestrator |
2026-02-14 03:26:02.993463 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-02-14 03:26:02.993474 | orchestrator | Saturday 14 February 2026 03:26:01 +0000 (0:00:00.147) 0:01:11.112 *****
2026-02-14 03:26:02.993485 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:26:02.993496 | orchestrator |
2026-02-14 03:26:02.993507 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-02-14 03:26:02.993518 | orchestrator | Saturday 14 February 2026 03:26:01 +0000 (0:00:00.145) 0:01:11.258 ***** 2026-02-14 03:26:02.993535 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:26:02.993546 | orchestrator | 2026-02-14 03:26:02.993557 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-14 03:26:02.993568 | orchestrator | Saturday 14 February 2026 03:26:01 +0000 (0:00:00.137) 0:01:11.396 ***** 2026-02-14 03:26:02.993579 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:26:02.993590 | orchestrator | 2026-02-14 03:26:02.993601 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-14 03:26:02.993612 | orchestrator | Saturday 14 February 2026 03:26:02 +0000 (0:00:00.353) 0:01:11.750 ***** 2026-02-14 03:26:02.993623 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:26:02.993634 | orchestrator | 2026-02-14 03:26:02.993684 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-14 03:26:02.993696 | orchestrator | Saturday 14 February 2026 03:26:02 +0000 (0:00:00.149) 0:01:11.899 ***** 2026-02-14 03:26:02.993707 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:26:02.993718 | orchestrator | 2026-02-14 03:26:02.993729 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-14 03:26:02.993740 | orchestrator | Saturday 14 February 2026 03:26:02 +0000 (0:00:00.151) 0:01:12.051 ***** 2026-02-14 03:26:02.993751 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:26:02.993762 | orchestrator | 2026-02-14 03:26:02.993773 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-14 03:26:02.993784 | orchestrator | Saturday 14 February 2026 03:26:02 +0000 (0:00:00.157) 0:01:12.209 ***** 2026-02-14 03:26:02.993795 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-1745485d-ab31-507e-930d-8d3ce82a0691', 'data_vg': 'ceph-1745485d-ab31-507e-930d-8d3ce82a0691'})  2026-02-14 03:26:02.993806 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7da5590-35e5-5703-96c8-37fe127c27f7', 'data_vg': 'ceph-f7da5590-35e5-5703-96c8-37fe127c27f7'})  2026-02-14 03:26:02.993817 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:26:02.993828 | orchestrator | 2026-02-14 03:26:02.993839 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-14 03:26:02.993850 | orchestrator | Saturday 14 February 2026 03:26:02 +0000 (0:00:00.172) 0:01:12.381 ***** 2026-02-14 03:26:02.993861 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1745485d-ab31-507e-930d-8d3ce82a0691', 'data_vg': 'ceph-1745485d-ab31-507e-930d-8d3ce82a0691'})  2026-02-14 03:26:02.993872 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7da5590-35e5-5703-96c8-37fe127c27f7', 'data_vg': 'ceph-f7da5590-35e5-5703-96c8-37fe127c27f7'})  2026-02-14 03:26:02.993883 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:26:02.993894 | orchestrator | 2026-02-14 03:26:02.993905 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-14 03:26:02.993916 | orchestrator | Saturday 14 February 2026 03:26:02 +0000 (0:00:00.161) 0:01:12.542 ***** 2026-02-14 03:26:02.993935 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1745485d-ab31-507e-930d-8d3ce82a0691', 'data_vg': 'ceph-1745485d-ab31-507e-930d-8d3ce82a0691'})  2026-02-14 03:26:06.111057 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7da5590-35e5-5703-96c8-37fe127c27f7', 'data_vg': 'ceph-f7da5590-35e5-5703-96c8-37fe127c27f7'})  2026-02-14 03:26:06.111163 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:26:06.111179 | orchestrator | 2026-02-14 03:26:06.111209 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-02-14 03:26:06.111223 | orchestrator | Saturday 14 February 2026 03:26:02 +0000 (0:00:00.167) 0:01:12.710 ***** 2026-02-14 03:26:06.111234 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1745485d-ab31-507e-930d-8d3ce82a0691', 'data_vg': 'ceph-1745485d-ab31-507e-930d-8d3ce82a0691'})  2026-02-14 03:26:06.111246 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7da5590-35e5-5703-96c8-37fe127c27f7', 'data_vg': 'ceph-f7da5590-35e5-5703-96c8-37fe127c27f7'})  2026-02-14 03:26:06.111278 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:26:06.111290 | orchestrator | 2026-02-14 03:26:06.111301 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-14 03:26:06.111313 | orchestrator | Saturday 14 February 2026 03:26:03 +0000 (0:00:00.161) 0:01:12.871 ***** 2026-02-14 03:26:06.111324 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1745485d-ab31-507e-930d-8d3ce82a0691', 'data_vg': 'ceph-1745485d-ab31-507e-930d-8d3ce82a0691'})  2026-02-14 03:26:06.111335 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7da5590-35e5-5703-96c8-37fe127c27f7', 'data_vg': 'ceph-f7da5590-35e5-5703-96c8-37fe127c27f7'})  2026-02-14 03:26:06.111347 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:26:06.111358 | orchestrator | 2026-02-14 03:26:06.111369 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-14 03:26:06.111380 | orchestrator | Saturday 14 February 2026 03:26:03 +0000 (0:00:00.163) 0:01:13.035 ***** 2026-02-14 03:26:06.111391 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1745485d-ab31-507e-930d-8d3ce82a0691', 'data_vg': 'ceph-1745485d-ab31-507e-930d-8d3ce82a0691'})  2026-02-14 03:26:06.111402 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-f7da5590-35e5-5703-96c8-37fe127c27f7', 'data_vg': 'ceph-f7da5590-35e5-5703-96c8-37fe127c27f7'})  2026-02-14 03:26:06.111413 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:26:06.111424 | orchestrator | 2026-02-14 03:26:06.111435 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-02-14 03:26:06.111446 | orchestrator | Saturday 14 February 2026 03:26:03 +0000 (0:00:00.155) 0:01:13.191 ***** 2026-02-14 03:26:06.111457 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1745485d-ab31-507e-930d-8d3ce82a0691', 'data_vg': 'ceph-1745485d-ab31-507e-930d-8d3ce82a0691'})  2026-02-14 03:26:06.111468 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7da5590-35e5-5703-96c8-37fe127c27f7', 'data_vg': 'ceph-f7da5590-35e5-5703-96c8-37fe127c27f7'})  2026-02-14 03:26:06.111479 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:26:06.111490 | orchestrator | 2026-02-14 03:26:06.111501 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-14 03:26:06.111512 | orchestrator | Saturday 14 February 2026 03:26:03 +0000 (0:00:00.166) 0:01:13.357 ***** 2026-02-14 03:26:06.111523 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1745485d-ab31-507e-930d-8d3ce82a0691', 'data_vg': 'ceph-1745485d-ab31-507e-930d-8d3ce82a0691'})  2026-02-14 03:26:06.111534 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7da5590-35e5-5703-96c8-37fe127c27f7', 'data_vg': 'ceph-f7da5590-35e5-5703-96c8-37fe127c27f7'})  2026-02-14 03:26:06.111545 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:26:06.111556 | orchestrator | 2026-02-14 03:26:06.111569 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-14 03:26:06.111581 | orchestrator | Saturday 14 February 2026 03:26:03 +0000 (0:00:00.172) 0:01:13.529 ***** 2026-02-14 03:26:06.111595 | 
orchestrator | ok: [testbed-node-5] 2026-02-14 03:26:06.111608 | orchestrator | 2026-02-14 03:26:06.111621 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-14 03:26:06.111665 | orchestrator | Saturday 14 February 2026 03:26:04 +0000 (0:00:00.768) 0:01:14.297 ***** 2026-02-14 03:26:06.111680 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:26:06.111694 | orchestrator | 2026-02-14 03:26:06.111706 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-14 03:26:06.111720 | orchestrator | Saturday 14 February 2026 03:26:05 +0000 (0:00:00.536) 0:01:14.834 ***** 2026-02-14 03:26:06.111732 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:26:06.111745 | orchestrator | 2026-02-14 03:26:06.111758 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-14 03:26:06.111770 | orchestrator | Saturday 14 February 2026 03:26:05 +0000 (0:00:00.160) 0:01:14.995 ***** 2026-02-14 03:26:06.111790 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-1745485d-ab31-507e-930d-8d3ce82a0691', 'vg_name': 'ceph-1745485d-ab31-507e-930d-8d3ce82a0691'}) 2026-02-14 03:26:06.111805 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-f7da5590-35e5-5703-96c8-37fe127c27f7', 'vg_name': 'ceph-f7da5590-35e5-5703-96c8-37fe127c27f7'}) 2026-02-14 03:26:06.111817 | orchestrator | 2026-02-14 03:26:06.111830 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-14 03:26:06.111843 | orchestrator | Saturday 14 February 2026 03:26:05 +0000 (0:00:00.179) 0:01:15.174 ***** 2026-02-14 03:26:06.111874 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1745485d-ab31-507e-930d-8d3ce82a0691', 'data_vg': 'ceph-1745485d-ab31-507e-930d-8d3ce82a0691'})  2026-02-14 03:26:06.111894 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-f7da5590-35e5-5703-96c8-37fe127c27f7', 'data_vg': 'ceph-f7da5590-35e5-5703-96c8-37fe127c27f7'})  2026-02-14 03:26:06.111907 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:26:06.111920 | orchestrator | 2026-02-14 03:26:06.111933 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-02-14 03:26:06.111945 | orchestrator | Saturday 14 February 2026 03:26:05 +0000 (0:00:00.167) 0:01:15.341 ***** 2026-02-14 03:26:06.111956 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1745485d-ab31-507e-930d-8d3ce82a0691', 'data_vg': 'ceph-1745485d-ab31-507e-930d-8d3ce82a0691'})  2026-02-14 03:26:06.111967 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7da5590-35e5-5703-96c8-37fe127c27f7', 'data_vg': 'ceph-f7da5590-35e5-5703-96c8-37fe127c27f7'})  2026-02-14 03:26:06.111978 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:26:06.111989 | orchestrator | 2026-02-14 03:26:06.112000 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-14 03:26:06.112010 | orchestrator | Saturday 14 February 2026 03:26:05 +0000 (0:00:00.158) 0:01:15.500 ***** 2026-02-14 03:26:06.112021 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1745485d-ab31-507e-930d-8d3ce82a0691', 'data_vg': 'ceph-1745485d-ab31-507e-930d-8d3ce82a0691'})  2026-02-14 03:26:06.112032 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7da5590-35e5-5703-96c8-37fe127c27f7', 'data_vg': 'ceph-f7da5590-35e5-5703-96c8-37fe127c27f7'})  2026-02-14 03:26:06.112043 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:26:06.112054 | orchestrator | 2026-02-14 03:26:06.112065 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-14 03:26:06.112076 | orchestrator | Saturday 14 February 2026 03:26:05 +0000 (0:00:00.166) 0:01:15.666 ***** 2026-02-14 03:26:06.112087 | 
orchestrator | ok: [testbed-node-5] => { 2026-02-14 03:26:06.112098 | orchestrator |  "lvm_report": { 2026-02-14 03:26:06.112109 | orchestrator |  "lv": [ 2026-02-14 03:26:06.112121 | orchestrator |  { 2026-02-14 03:26:06.112132 | orchestrator |  "lv_name": "osd-block-1745485d-ab31-507e-930d-8d3ce82a0691", 2026-02-14 03:26:06.112144 | orchestrator |  "vg_name": "ceph-1745485d-ab31-507e-930d-8d3ce82a0691" 2026-02-14 03:26:06.112154 | orchestrator |  }, 2026-02-14 03:26:06.112165 | orchestrator |  { 2026-02-14 03:26:06.112176 | orchestrator |  "lv_name": "osd-block-f7da5590-35e5-5703-96c8-37fe127c27f7", 2026-02-14 03:26:06.112187 | orchestrator |  "vg_name": "ceph-f7da5590-35e5-5703-96c8-37fe127c27f7" 2026-02-14 03:26:06.112198 | orchestrator |  } 2026-02-14 03:26:06.112208 | orchestrator |  ], 2026-02-14 03:26:06.112219 | orchestrator |  "pv": [ 2026-02-14 03:26:06.112230 | orchestrator |  { 2026-02-14 03:26:06.112241 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-14 03:26:06.112252 | orchestrator |  "vg_name": "ceph-1745485d-ab31-507e-930d-8d3ce82a0691" 2026-02-14 03:26:06.112263 | orchestrator |  }, 2026-02-14 03:26:06.112273 | orchestrator |  { 2026-02-14 03:26:06.112284 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-14 03:26:06.112306 | orchestrator |  "vg_name": "ceph-f7da5590-35e5-5703-96c8-37fe127c27f7" 2026-02-14 03:26:06.112317 | orchestrator |  } 2026-02-14 03:26:06.112328 | orchestrator |  ] 2026-02-14 03:26:06.112339 | orchestrator |  } 2026-02-14 03:26:06.112350 | orchestrator | } 2026-02-14 03:26:06.112361 | orchestrator | 2026-02-14 03:26:06.112372 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 03:26:06.112383 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-14 03:26:06.112394 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-14 03:26:06.112405 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-14 03:26:06.112416 | orchestrator | 2026-02-14 03:26:06.112427 | orchestrator | 2026-02-14 03:26:06.112437 | orchestrator | 2026-02-14 03:26:06.112448 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 03:26:06.112459 | orchestrator | Saturday 14 February 2026 03:26:06 +0000 (0:00:00.143) 0:01:15.810 ***** 2026-02-14 03:26:06.112470 | orchestrator | =============================================================================== 2026-02-14 03:26:06.112480 | orchestrator | Create block VGs -------------------------------------------------------- 5.61s 2026-02-14 03:26:06.112491 | orchestrator | Create block LVs -------------------------------------------------------- 4.27s 2026-02-14 03:26:06.112502 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.90s 2026-02-14 03:26:06.112513 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.79s 2026-02-14 03:26:06.112523 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.62s 2026-02-14 03:26:06.112534 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.60s 2026-02-14 03:26:06.112544 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.58s 2026-02-14 03:26:06.112555 | orchestrator | Add known links to the list of available block devices ------------------ 1.45s 2026-02-14 03:26:06.112573 | orchestrator | Add known partitions to the list of available block devices ------------- 1.29s 2026-02-14 03:26:06.492850 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.21s 2026-02-14 03:26:06.492950 | orchestrator | Add known links to the list of available block devices ------------------ 0.92s 2026-02-14 03:26:06.492965 | 
orchestrator | Add known links to the list of available block devices ------------------ 0.87s 2026-02-14 03:26:06.492995 | orchestrator | Print 'Create block VGs' ------------------------------------------------ 0.78s 2026-02-14 03:26:06.493007 | orchestrator | Calculate VG sizes (with buffer) ---------------------------------------- 0.77s 2026-02-14 03:26:06.493018 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.75s 2026-02-14 03:26:06.493029 | orchestrator | Print LVM report data --------------------------------------------------- 0.74s 2026-02-14 03:26:06.493040 | orchestrator | Get initial list of available block devices ----------------------------- 0.73s 2026-02-14 03:26:06.493051 | orchestrator | Print 'Create block LVs' ------------------------------------------------ 0.72s 2026-02-14 03:26:06.493062 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s 2026-02-14 03:26:06.493073 | orchestrator | Count OSDs put on ceph_db_wal_devices defined in lvm_volumes ------------ 0.71s 2026-02-14 03:26:18.875006 | orchestrator | 2026-02-14 03:26:18 | INFO  | Task 72434af0-00c3-4e28-8a3b-1989098991a8 (facts) was prepared for execution. 2026-02-14 03:26:18.875121 | orchestrator | 2026-02-14 03:26:18 | INFO  | It takes a moment until task 72434af0-00c3-4e28-8a3b-1989098991a8 (facts) has been started and output is visible here. 
2026-02-14 03:26:31.869360 | orchestrator | 2026-02-14 03:26:31.869445 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-14 03:26:31.869473 | orchestrator | 2026-02-14 03:26:31.869480 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-14 03:26:31.869486 | orchestrator | Saturday 14 February 2026 03:26:23 +0000 (0:00:00.265) 0:00:00.265 ***** 2026-02-14 03:26:31.869491 | orchestrator | ok: [testbed-manager] 2026-02-14 03:26:31.869498 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:26:31.869503 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:26:31.869509 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:26:31.869514 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:26:31.869520 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:26:31.869563 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:26:31.869570 | orchestrator | 2026-02-14 03:26:31.869575 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-14 03:26:31.869580 | orchestrator | Saturday 14 February 2026 03:26:24 +0000 (0:00:01.181) 0:00:01.446 ***** 2026-02-14 03:26:31.869586 | orchestrator | skipping: [testbed-manager] 2026-02-14 03:26:31.869592 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:26:31.869597 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:26:31.869602 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:26:31.869607 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:26:31.869612 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:26:31.869618 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:26:31.869623 | orchestrator | 2026-02-14 03:26:31.869628 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-14 03:26:31.869633 | orchestrator | 2026-02-14 03:26:31.869639 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-14 03:26:31.869644 | orchestrator | Saturday 14 February 2026 03:26:25 +0000 (0:00:01.319) 0:00:02.766 ***** 2026-02-14 03:26:31.869649 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:26:31.869654 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:26:31.869659 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:26:31.869664 | orchestrator | ok: [testbed-manager] 2026-02-14 03:26:31.869670 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:26:31.869675 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:26:31.869680 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:26:31.869685 | orchestrator | 2026-02-14 03:26:31.869690 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-14 03:26:31.869695 | orchestrator | 2026-02-14 03:26:31.869700 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-14 03:26:31.869705 | orchestrator | Saturday 14 February 2026 03:26:30 +0000 (0:00:05.285) 0:00:08.051 ***** 2026-02-14 03:26:31.869711 | orchestrator | skipping: [testbed-manager] 2026-02-14 03:26:31.869716 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:26:31.869721 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:26:31.869726 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:26:31.869731 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:26:31.869736 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:26:31.869741 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:26:31.869746 | orchestrator | 2026-02-14 03:26:31.869751 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 03:26:31.869757 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-14 03:26:31.869764 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-14 03:26:31.869769 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-14 03:26:31.869774 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-14 03:26:31.869780 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-14 03:26:31.869790 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-14 03:26:31.869795 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-14 03:26:31.869800 | orchestrator | 2026-02-14 03:26:31.869805 | orchestrator | 2026-02-14 03:26:31.869810 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 03:26:31.869827 | orchestrator | Saturday 14 February 2026 03:26:31 +0000 (0:00:00.545) 0:00:08.596 ***** 2026-02-14 03:26:31.869833 | orchestrator | =============================================================================== 2026-02-14 03:26:31.869838 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.29s 2026-02-14 03:26:31.869843 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.32s 2026-02-14 03:26:31.869848 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.18s 2026-02-14 03:26:31.869853 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s 2026-02-14 03:26:34.188237 | orchestrator | 2026-02-14 03:26:34 | INFO  | Task 4b97c243-2d22-4e4e-af76-2cd7803affb4 (ceph) was prepared for execution. 2026-02-14 03:26:34.188353 | orchestrator | 2026-02-14 03:26:34 | INFO  | It takes a moment until task 4b97c243-2d22-4e4e-af76-2cd7803affb4 (ceph) has been started and output is visible here. 
2026-02-14 03:26:52.515941 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-14 03:26:52.516058 | orchestrator | 2.16.14 2026-02-14 03:26:52.516077 | orchestrator | 2026-02-14 03:26:52.516089 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-02-14 03:26:52.516102 | orchestrator | 2026-02-14 03:26:52.516113 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-14 03:26:52.516125 | orchestrator | Saturday 14 February 2026 03:26:39 +0000 (0:00:00.787) 0:00:00.787 ***** 2026-02-14 03:26:52.516137 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:26:52.516148 | orchestrator | 2026-02-14 03:26:52.516160 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-14 03:26:52.516170 | orchestrator | Saturday 14 February 2026 03:26:40 +0000 (0:00:01.200) 0:00:01.987 ***** 2026-02-14 03:26:52.516181 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:26:52.516192 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:26:52.516203 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:26:52.516214 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:26:52.516225 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:26:52.516239 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:26:52.516259 | orchestrator | 2026-02-14 03:26:52.516277 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-14 03:26:52.516295 | orchestrator | Saturday 14 February 2026 03:26:41 +0000 (0:00:01.362) 0:00:03.350 ***** 2026-02-14 03:26:52.516311 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:26:52.516328 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:26:52.516345 | orchestrator | ok: [testbed-node-5] 2026-02-14 
03:26:52.516362 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:26:52.516379 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:26:52.516397 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:26:52.516414 | orchestrator | 2026-02-14 03:26:52.516432 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-14 03:26:52.516479 | orchestrator | Saturday 14 February 2026 03:26:42 +0000 (0:00:00.805) 0:00:04.156 ***** 2026-02-14 03:26:52.516500 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:26:52.516519 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:26:52.516533 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:26:52.516549 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:26:52.516604 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:26:52.516625 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:26:52.516644 | orchestrator | 2026-02-14 03:26:52.516656 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-14 03:26:52.516669 | orchestrator | Saturday 14 February 2026 03:26:43 +0000 (0:00:00.927) 0:00:05.083 ***** 2026-02-14 03:26:52.516682 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:26:52.516695 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:26:52.516706 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:26:52.516717 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:26:52.516727 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:26:52.516738 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:26:52.516749 | orchestrator | 2026-02-14 03:26:52.516760 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-14 03:26:52.516771 | orchestrator | Saturday 14 February 2026 03:26:44 +0000 (0:00:00.878) 0:00:05.961 ***** 2026-02-14 03:26:52.516782 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:26:52.516792 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:26:52.516803 | orchestrator | ok: 
[testbed-node-5]
2026-02-14 03:26:52.516814 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:26:52.516824 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:26:52.516835 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:26:52.516845 | orchestrator |
2026-02-14 03:26:52.516857 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-14 03:26:52.516876 | orchestrator | Saturday 14 February 2026 03:26:45 +0000 (0:00:00.612) 0:00:06.573 *****
2026-02-14 03:26:52.516894 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:26:52.516912 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:26:52.516930 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:26:52.516947 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:26:52.516963 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:26:52.516981 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:26:52.516999 | orchestrator |
2026-02-14 03:26:52.517018 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-14 03:26:52.517038 | orchestrator | Saturday 14 February 2026 03:26:45 +0000 (0:00:00.858) 0:00:07.432 *****
2026-02-14 03:26:52.517057 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:26:52.517070 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:26:52.517081 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:26:52.517092 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:26:52.517103 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:26:52.517113 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:26:52.517124 | orchestrator |
2026-02-14 03:26:52.517135 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-14 03:26:52.517146 | orchestrator | Saturday 14 February 2026 03:26:46 +0000 (0:00:00.610) 0:00:08.042 *****
2026-02-14 03:26:52.517157 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:26:52.517168 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:26:52.517178 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:26:52.517189 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:26:52.517200 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:26:52.517227 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:26:52.517238 | orchestrator |
2026-02-14 03:26:52.517249 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-14 03:26:52.517259 | orchestrator | Saturday 14 February 2026 03:26:47 +0000 (0:00:00.756) 0:00:08.799 *****
2026-02-14 03:26:52.517270 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-14 03:26:52.517281 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-14 03:26:52.517292 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-14 03:26:52.517303 | orchestrator |
2026-02-14 03:26:52.517313 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-14 03:26:52.517324 | orchestrator | Saturday 14 February 2026 03:26:47 +0000 (0:00:00.635) 0:00:09.434 *****
2026-02-14 03:26:52.517347 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:26:52.517358 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:26:52.517368 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:26:52.517401 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:26:52.517412 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:26:52.517423 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:26:52.517434 | orchestrator |
2026-02-14 03:26:52.517467 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-14 03:26:52.517479 | orchestrator | Saturday 14 February 2026 03:26:48 +0000 (0:00:00.733) 0:00:10.167 *****
2026-02-14 03:26:52.517490 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-14 03:26:52.517501 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-14 03:26:52.517511 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-14 03:26:52.517522 | orchestrator |
2026-02-14 03:26:52.517533 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-14 03:26:52.517544 | orchestrator | Saturday 14 February 2026 03:26:51 +0000 (0:00:02.429) 0:00:12.597 *****
2026-02-14 03:26:52.517555 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-14 03:26:52.517566 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-14 03:26:52.517577 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-14 03:26:52.517588 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:26:52.517599 | orchestrator |
2026-02-14 03:26:52.517610 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-14 03:26:52.517621 | orchestrator | Saturday 14 February 2026 03:26:51 +0000 (0:00:00.418) 0:00:13.015 *****
2026-02-14 03:26:52.517634 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-14 03:26:52.517649 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-14 03:26:52.517660 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-14 03:26:52.517671 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:26:52.517682 | orchestrator |
2026-02-14 03:26:52.517693 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-14 03:26:52.517704 | orchestrator | Saturday 14 February 2026 03:26:52 +0000 (0:00:00.650) 0:00:13.665 *****
2026-02-14 03:26:52.517717 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-14 03:26:52.517730 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-14 03:26:52.517742 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-14 03:26:52.517761 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:26:52.517772 | orchestrator |
2026-02-14 03:26:52.517789 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-14 03:26:52.517800 | orchestrator | Saturday 14 February 2026 03:26:52 +0000 (0:00:00.174) 0:00:13.839 *****
2026-02-14 03:26:52.517821 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-14 03:26:49.536087', 'end': '2026-02-14 03:26:49.584680', 'delta': '0:00:00.048593', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-14 03:27:02.161494 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-14 03:26:50.108413', 'end': '2026-02-14 03:26:50.152820', 'delta': '0:00:00.044407', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-14 03:27:02.161601 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-14 03:26:50.669530', 'end': '2026-02-14 03:26:50.714829', 'delta': '0:00:00.045299', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-14 03:27:02.161616 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:27:02.161628 | orchestrator |
2026-02-14 03:27:02.161640 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-14 03:27:02.161652 | orchestrator | Saturday 14 February 2026 03:26:52 +0000 (0:00:00.198) 0:00:14.038 *****
2026-02-14 03:27:02.161662 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:27:02.161672 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:27:02.161682 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:27:02.161691 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:27:02.161701 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:27:02.161711 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:27:02.161720 | orchestrator |
2026-02-14 03:27:02.161730 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-14 03:27:02.161740 | orchestrator | Saturday 14 February 2026 03:26:53 +0000 (0:00:00.716) 0:00:14.755 *****
2026-02-14 03:27:02.161750 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-14 03:27:02.161760 | orchestrator |
2026-02-14 03:27:02.161769 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-14 03:27:02.161779 | orchestrator | Saturday 14 February 2026 03:26:54 +0000 (0:00:00.880) 0:00:15.635 *****
2026-02-14 03:27:02.161827 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:27:02.161838 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:27:02.161847 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:27:02.161857 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:27:02.161867 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:27:02.161876 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:27:02.161886 | orchestrator |
2026-02-14 03:27:02.161896 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-14 03:27:02.161906 | orchestrator | Saturday 14 February 2026 03:26:54 +0000 (0:00:00.837) 0:00:16.472 *****
2026-02-14 03:27:02.161916 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:27:02.161925 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:27:02.161935 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:27:02.161945 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:27:02.161954 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:27:02.161964 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:27:02.161974 | orchestrator |
2026-02-14 03:27:02.161984 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-14 03:27:02.161996 | orchestrator | Saturday 14 February 2026 03:26:56 +0000 (0:00:01.152) 0:00:17.625 *****
2026-02-14 03:27:02.162008 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:27:02.162076 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:27:02.162088 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:27:02.162100 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:27:02.162111 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:27:02.162136 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:27:02.162148 | orchestrator |
2026-02-14 03:27:02.162159 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-14 03:27:02.162171 | orchestrator | Saturday 14 February 2026 03:26:56 +0000 (0:00:00.593) 0:00:18.218 *****
2026-02-14 03:27:02.162181 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:27:02.162193 | orchestrator |
2026-02-14 03:27:02.162204 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-14 03:27:02.162215 | orchestrator | Saturday 14 February 2026 03:26:56 +0000 (0:00:00.118) 0:00:18.337 *****
2026-02-14 03:27:02.162226 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:27:02.162237 | orchestrator |
2026-02-14 03:27:02.162249 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-14 03:27:02.162260 | orchestrator | Saturday 14 February 2026 03:26:57 +0000 (0:00:00.218) 0:00:18.555 *****
2026-02-14 03:27:02.162271 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:27:02.162282 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:27:02.162294 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:27:02.162305 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:27:02.162316 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:27:02.162328 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:27:02.162340 | orchestrator |
2026-02-14 03:27:02.162367 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-14 03:27:02.162377 | orchestrator | Saturday 14 February 2026 03:26:57 +0000 (0:00:00.803) 0:00:19.359 *****
2026-02-14 03:27:02.162387 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:27:02.162396 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:27:02.162406 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:27:02.162437 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:27:02.162447 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:27:02.162457 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:27:02.162467 | orchestrator |
2026-02-14 03:27:02.162476 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-14 03:27:02.162486 | orchestrator | Saturday 14 February 2026 03:26:58 +0000 (0:00:00.622) 0:00:19.981 *****
2026-02-14 03:27:02.162496 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:27:02.162505 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:27:02.162526 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:27:02.162544 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:27:02.162554 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:27:02.162563 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:27:02.162573 | orchestrator |
2026-02-14 03:27:02.162583 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-14 03:27:02.162592 | orchestrator | Saturday 14 February 2026 03:26:59 +0000 (0:00:00.599) 0:00:20.781 *****
2026-02-14 03:27:02.162602 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:27:02.162612 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:27:02.162621 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:27:02.162642 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:27:02.162652 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:27:02.162661 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:27:02.162671 | orchestrator |
2026-02-14 03:27:02.162681 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-14 03:27:02.162691 | orchestrator | Saturday 14 February 2026 03:26:59 +0000 (0:00:00.599) 0:00:21.381 *****
2026-02-14 03:27:02.162700 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:27:02.162710 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:27:02.162720 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:27:02.162740 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:27:02.162750 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:27:02.162760 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:27:02.162769 | orchestrator |
2026-02-14 03:27:02.162779 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-14 03:27:02.162789 | orchestrator | Saturday 14 February 2026 03:27:00 +0000 (0:00:00.802) 0:00:22.184 *****
2026-02-14 03:27:02.162798 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:27:02.162808 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:27:02.162817 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:27:02.162827 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:27:02.162836 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:27:02.162846 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:27:02.162855 | orchestrator |
2026-02-14 03:27:02.162865 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-14 03:27:02.162876 | orchestrator | Saturday 14 February 2026 03:27:01 +0000 (0:00:00.597) 0:00:22.781 *****
2026-02-14 03:27:02.162885 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:27:02.162895 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:27:02.162904 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:27:02.162914 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:27:02.162923 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:27:02.162933 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:27:02.162943 | orchestrator |
2026-02-14 03:27:02.162952 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-14 03:27:02.162962 | orchestrator | Saturday 14 February 2026 03:27:02 +0000 (0:00:00.788) 0:00:23.569 *****
2026-02-14 03:27:02.162973 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d74a1ea4--c27e--5375--be56--9d9a8e069fa6-osd--block--d74a1ea4--c27e--5375--be56--9d9a8e069fa6', 'dm-uuid-LVM-bsT5DZ8cw32sKmXOfJetQqGU0HxblzT0Oj0FlQ0hDfJ2MaenWm21pneMRY3n5AFS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-14 03:27:02.163003 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--86d1df08--738c--52e0--accb--8c0a21213af6-osd--block--86d1df08--738c--52e0--accb--8c0a21213af6', 'dm-uuid-LVM-y8TFd42k7h3tskYaBmVU96eirAODLPPWLm3s7r1uHf3qd9eZ715af0u59pi4vRGe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-14 03:27:02.163029 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-14 03:27:02.281866 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-14 03:27:02.281971 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-14 03:27:02.281987 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-14 03:27:02.281999 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-14 03:27:02.282011 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-14 03:27:02.282082 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-14 03:27:02.282094 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-14 03:27:02.282148 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part1', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part14', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part15', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part16', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-14 03:27:02.282188 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d74a1ea4--c27e--5375--be56--9d9a8e069fa6-osd--block--d74a1ea4--c27e--5375--be56--9d9a8e069fa6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-D7g0SF-SeWa-7MSU-rwcF-cnTN-mPuF-kfA0YK', 'scsi-0QEMU_QEMU_HARDDISK_763dae4f-8aba-40cd-b4e7-eeabad093491', 'scsi-SQEMU_QEMU_HARDDISK_763dae4f-8aba-40cd-b4e7-eeabad093491'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-14 03:27:02.282203 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--86d1df08--738c--52e0--accb--8c0a21213af6-osd--block--86d1df08--738c--52e0--accb--8c0a21213af6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oc2pXT-2pSW-cOnk-GYPm-BmdS-2yWK-CLqXT7', 'scsi-0QEMU_QEMU_HARDDISK_2ec12fdb-ec43-4dc2-9206-4086e60213b8', 'scsi-SQEMU_QEMU_HARDDISK_2ec12fdb-ec43-4dc2-9206-4086e60213b8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-14 03:27:02.282216 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8657c064-423f-4604-b6db-e42322d0b025', 'scsi-SQEMU_QEMU_HARDDISK_8657c064-423f-4604-b6db-e42322d0b025'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-14 03:27:02.282241 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-14-02-18-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-14 03:27:02.282262 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7b577363--2bac--543e--944e--5354861b1af5-osd--block--7b577363--2bac--543e--944e--5354861b1af5', 'dm-uuid-LVM-0VL0CxXxe2vdWsz49rVaxb3uSV9CWoFcSN89ximT6SOMxwvqsIuUyBOeGRYcFBXd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-14 03:27:02.411172 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--df737486--1b51--5b4a--92b8--76d7a8957091-osd--block--df737486--1b51--5b4a--92b8--76d7a8957091', 'dm-uuid-LVM-EB1XqRdFm5BWl32sOsML4BzRiPAaSfab8xK25yZZCddpKgHxc3NQuNizerGpwRdL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-14 03:27:02.411270 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-14 03:27:02.411286 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-14 03:27:02.411298 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-14 03:27:02.411310 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:27:02.411323 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-14 03:27:02.411335 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-14 03:27:02.411384 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-14 03:27:02.411397 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-14 03:27:02.411510 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-14 03:27:02.411554 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part1', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part14', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part15', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part16', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-14 03:27:02.411570 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7b577363--2bac--543e--944e--5354861b1af5-osd--block--7b577363--2bac--543e--944e--5354861b1af5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PPJEoE-t8lH-Lsu9-VCxv-DzG3-SEi9-DpziQD', 'scsi-0QEMU_QEMU_HARDDISK_f8b6a063-90c4-466f-950a-7ec8689e5fcc', 'scsi-SQEMU_QEMU_HARDDISK_f8b6a063-90c4-466f-950a-7ec8689e5fcc'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-14 03:27:02.411599 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1745485d--ab31--507e--930d--8d3ce82a0691-osd--block--1745485d--ab31--507e--930d--8d3ce82a0691', 'dm-uuid-LVM-XF74CRGH0USDiTPtHNxBQbnIHrjKBwEGozNSSmTzZ40xZxDrUnqvt7q7MTHzgzhl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-14 03:27:02.411619 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--df737486--1b51--5b4a--92b8--76d7a8957091-osd--block--df737486--1b51--5b4a--92b8--76d7a8957091'], 'host': 'SCSI storage controller: Red 
Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9XBo1I-CFLx-ADHD-pZVq-BmE6-mdcf-IWW9zX', 'scsi-0QEMU_QEMU_HARDDISK_f960435b-b83d-47c8-ac31-653544f80bd0', 'scsi-SQEMU_QEMU_HARDDISK_f960435b-b83d-47c8-ac31-653544f80bd0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-14 03:27:02.614164 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f7da5590--35e5--5703--96c8--37fe127c27f7-osd--block--f7da5590--35e5--5703--96c8--37fe127c27f7', 'dm-uuid-LVM-MtrIT20WffpmoZtgfeTXRFdMHN6P3sAdBjy5doWEhe9rKv9L584cW3XE9oTwvrjF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-14 03:27:02.614306 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_600e740f-7698-45cf-9f18-28df3084435e', 'scsi-SQEMU_QEMU_HARDDISK_600e740f-7698-45cf-9f18-28df3084435e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-14 03:27:02.614332 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-14-02-18-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-14 03:27:02.614351 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:27:02.614453 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-02-14 03:27:02.614503 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:27:02.614524 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:27:02.614541 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:27:02.614583 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:27:02.614601 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:27:02.614619 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:27:02.614638 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:27:02.614670 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part1', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part14', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part15', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part16', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-14 03:27:02.614705 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--1745485d--ab31--507e--930d--8d3ce82a0691-osd--block--1745485d--ab31--507e--930d--8d3ce82a0691'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-56EAYM-xHsu-7hCn-RY2l-0van-u71J-PPT3Ej', 'scsi-0QEMU_QEMU_HARDDISK_89ffb490-ef56-465e-9c2a-8772cc279d48', 'scsi-SQEMU_QEMU_HARDDISK_89ffb490-ef56-465e-9c2a-8772cc279d48'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-14 03:27:02.614736 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f7da5590--35e5--5703--96c8--37fe127c27f7-osd--block--f7da5590--35e5--5703--96c8--37fe127c27f7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5s32D9-BYka-Bj8X-nglK-5PU8-KqP1-tEDCHR', 'scsi-0QEMU_QEMU_HARDDISK_54e6ca54-a1fe-4396-8891-5cf52f763d40', 'scsi-SQEMU_QEMU_HARDDISK_54e6ca54-a1fe-4396-8891-5cf52f763d40'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-14 03:27:02.767157 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43152e32-b25a-4e6e-b6b7-c3272099ce67', 'scsi-SQEMU_QEMU_HARDDISK_43152e32-b25a-4e6e-b6b7-c3272099ce67'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-14 03:27:02.767280 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-14-02-18-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-14 03:27:02.767333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:27:02.767356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-02-14 03:27:02.767391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:27:02.767480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:27:02.767499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:27:02.767517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:27:02.767556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:27:02.767575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:27:02.767605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part1', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part14', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part15', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part16', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-14 03:27:02.767639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-14-02-18-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-14 03:27:02.767659 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:27:02.767680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:27:02.767701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:27:02.767728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:27:03.007377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:27:03.007523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:27:03.007534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:27:03.007543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:27:03.007568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:27:03.007609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9', 'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part1', 'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part14', 'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part15', 'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part16', 'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-14 03:27:03.007642 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-14-02-18-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-14 03:27:03.007660 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:27:03.007677 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:27:03.007689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:27:03.007698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:27:03.007712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:27:03.007721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:27:03.007730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:27:03.007739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:27:03.007748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:27:03.007765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:27:03.225012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part1', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part14', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part15', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part16', 
'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-14 03:27:03.225114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-14-02-18-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-14 03:27:03.225132 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:27:03.225146 | orchestrator | 2026-02-14 03:27:03.225161 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-14 03:27:03.225175 | orchestrator | Saturday 14 February 2026 03:27:02 +0000 (0:00:00.965) 0:00:24.535 ***** 2026-02-14 03:27:03.225191 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d74a1ea4--c27e--5375--be56--9d9a8e069fa6-osd--block--d74a1ea4--c27e--5375--be56--9d9a8e069fa6', 'dm-uuid-LVM-bsT5DZ8cw32sKmXOfJetQqGU0HxblzT0Oj0FlQ0hDfJ2MaenWm21pneMRY3n5AFS'], 'labels': 
[], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.225244 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--86d1df08--738c--52e0--accb--8c0a21213af6-osd--block--86d1df08--738c--52e0--accb--8c0a21213af6', 'dm-uuid-LVM-y8TFd42k7h3tskYaBmVU96eirAODLPPWLm3s7r1uHf3qd9eZ715af0u59pi4vRGe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.225259 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.225274 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery 
| default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.225293 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.225306 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.225319 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.225360 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.225382 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7b577363--2bac--543e--944e--5354861b1af5-osd--block--7b577363--2bac--543e--944e--5354861b1af5', 'dm-uuid-LVM-0VL0CxXxe2vdWsz49rVaxb3uSV9CWoFcSN89ximT6SOMxwvqsIuUyBOeGRYcFBXd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.569696 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.569827 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--df737486--1b51--5b4a--92b8--76d7a8957091-osd--block--df737486--1b51--5b4a--92b8--76d7a8957091', 'dm-uuid-LVM-EB1XqRdFm5BWl32sOsML4BzRiPAaSfab8xK25yZZCddpKgHxc3NQuNizerGpwRdL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.569845 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.569855 | orchestrator | 
skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.569904 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part1', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part14', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part15', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part16', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.569920 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.569930 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--d74a1ea4--c27e--5375--be56--9d9a8e069fa6-osd--block--d74a1ea4--c27e--5375--be56--9d9a8e069fa6'], 'host': 
'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-D7g0SF-SeWa-7MSU-rwcF-cnTN-mPuF-kfA0YK', 'scsi-0QEMU_QEMU_HARDDISK_763dae4f-8aba-40cd-b4e7-eeabad093491', 'scsi-SQEMU_QEMU_HARDDISK_763dae4f-8aba-40cd-b4e7-eeabad093491'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.569939 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.569953 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--86d1df08--738c--52e0--accb--8c0a21213af6-osd--block--86d1df08--738c--52e0--accb--8c0a21213af6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oc2pXT-2pSW-cOnk-GYPm-BmdS-2yWK-CLqXT7', 'scsi-0QEMU_QEMU_HARDDISK_2ec12fdb-ec43-4dc2-9206-4086e60213b8', 'scsi-SQEMU_QEMU_HARDDISK_2ec12fdb-ec43-4dc2-9206-4086e60213b8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.569969 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.572527 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8657c064-423f-4604-b6db-e42322d0b025', 'scsi-SQEMU_QEMU_HARDDISK_8657c064-423f-4604-b6db-e42322d0b025'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.572580 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.572591 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-14-02-18-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.572611 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.572620 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.572629 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.572657 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part1', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part14', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part15', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part16', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.572675 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--7b577363--2bac--543e--944e--5354861b1af5-osd--block--7b577363--2bac--543e--944e--5354861b1af5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PPJEoE-t8lH-Lsu9-VCxv-DzG3-SEi9-DpziQD', 'scsi-0QEMU_QEMU_HARDDISK_f8b6a063-90c4-466f-950a-7ec8689e5fcc', 'scsi-SQEMU_QEMU_HARDDISK_f8b6a063-90c4-466f-950a-7ec8689e5fcc'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.572690 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--df737486--1b51--5b4a--92b8--76d7a8957091-osd--block--df737486--1b51--5b4a--92b8--76d7a8957091'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9XBo1I-CFLx-ADHD-pZVq-BmE6-mdcf-IWW9zX', 'scsi-0QEMU_QEMU_HARDDISK_f960435b-b83d-47c8-ac31-653544f80bd0', 'scsi-SQEMU_QEMU_HARDDISK_f960435b-b83d-47c8-ac31-653544f80bd0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.725492 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_600e740f-7698-45cf-9f18-28df3084435e', 'scsi-SQEMU_QEMU_HARDDISK_600e740f-7698-45cf-9f18-28df3084435e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.725594 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-14-02-18-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.725631 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1745485d--ab31--507e--930d--8d3ce82a0691-osd--block--1745485d--ab31--507e--930d--8d3ce82a0691', 'dm-uuid-LVM-XF74CRGH0USDiTPtHNxBQbnIHrjKBwEGozNSSmTzZ40xZxDrUnqvt7q7MTHzgzhl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.725643 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f7da5590--35e5--5703--96c8--37fe127c27f7-osd--block--f7da5590--35e5--5703--96c8--37fe127c27f7', 'dm-uuid-LVM-MtrIT20WffpmoZtgfeTXRFdMHN6P3sAdBjy5doWEhe9rKv9L584cW3XE9oTwvrjF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.725656 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.725686 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.725705 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.725717 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.725735 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.725747 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.725758 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.725769 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.725801 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part1', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part14', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part15', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part16', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-14 03:27:03.883306 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--1745485d--ab31--507e--930d--8d3ce82a0691-osd--block--1745485d--ab31--507e--930d--8d3ce82a0691'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-56EAYM-xHsu-7hCn-RY2l-0van-u71J-PPT3Ej', 'scsi-0QEMU_QEMU_HARDDISK_89ffb490-ef56-465e-9c2a-8772cc279d48', 'scsi-SQEMU_QEMU_HARDDISK_89ffb490-ef56-465e-9c2a-8772cc279d48'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.883499 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:27:03.883524 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f7da5590--35e5--5703--96c8--37fe127c27f7-osd--block--f7da5590--35e5--5703--96c8--37fe127c27f7'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5s32D9-BYka-Bj8X-nglK-5PU8-KqP1-tEDCHR', 'scsi-0QEMU_QEMU_HARDDISK_54e6ca54-a1fe-4396-8891-5cf52f763d40', 'scsi-SQEMU_QEMU_HARDDISK_54e6ca54-a1fe-4396-8891-5cf52f763d40'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.883540 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43152e32-b25a-4e6e-b6b7-c3272099ce67', 'scsi-SQEMU_QEMU_HARDDISK_43152e32-b25a-4e6e-b6b7-c3272099ce67'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.883554 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-14-02-18-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.883627 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.883649 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.883669 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.883793 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.883829 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.883844 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.883869 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:03.883897 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:04.027973 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part1', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part14', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part15', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part16', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-14 03:27:04.028080 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:27:04.028104 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-14-02-18-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:04.028141 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:04.028175 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:04.028188 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:04.028200 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:04.028236 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:04.028254 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:04.028274 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:04.028285 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:04.028308 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9', 'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part1', 'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part14', 'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part15', 'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part16', 'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:04.274907 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-14-02-18-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:04.275033 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:27:04.275051 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:27:04.275063 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:27:04.275076 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:04.275089 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:04.275101 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:04.275112 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:04.275124 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:04.275168 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:04.275181 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-02-14 03:27:04.275193 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:27:04.275208 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part1', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part14', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part15', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part16', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-14 03:27:04.275240 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-14-02-18-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-14 03:27:15.796332 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:27:15.796505 | orchestrator |
2026-02-14 03:27:15.796522 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-14 03:27:15.796535 | orchestrator | Saturday 14 February 2026 03:27:04 +0000 (0:00:01.263) 0:00:25.798 *****
2026-02-14 03:27:15.796547 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:27:15.796558 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:27:15.796569 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:27:15.796580 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:27:15.796590 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:27:15.796601 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:27:15.796612 | orchestrator |
2026-02-14 03:27:15.796623 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-14 03:27:15.796634 | orchestrator | Saturday 14 February 2026 03:27:05 +0000 (0:00:00.917) 0:00:26.716 *****
2026-02-14 03:27:15.796645 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:27:15.796655 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:27:15.796666 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:27:15.796677 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:27:15.796687 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:27:15.796698 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:27:15.796708 | orchestrator |
2026-02-14 03:27:15.796719 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-14 03:27:15.796730 | orchestrator | Saturday 14 February 2026 03:27:05 +0000 (0:00:00.784) 0:00:27.501 *****
2026-02-14 03:27:15.796741 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:27:15.796752 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:27:15.796763 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:27:15.796774 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:27:15.796784 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:27:15.796795 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:27:15.796806 | orchestrator |
2026-02-14 03:27:15.796817 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-14 03:27:15.796829 | orchestrator | Saturday 14 February 2026 03:27:06 +0000 (0:00:00.565) 0:00:28.066 *****
2026-02-14 03:27:15.796840 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:27:15.796851 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:27:15.796861 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:27:15.796872 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:27:15.796883 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:27:15.796894 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:27:15.796904 | orchestrator |
2026-02-14 03:27:15.796915 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-14 03:27:15.796926 | orchestrator | Saturday 14 February 2026 03:27:07 +0000 (0:00:00.799) 0:00:28.866 *****
2026-02-14 03:27:15.796936 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:27:15.796947 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:27:15.796958 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:27:15.796994 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:27:15.797006 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:27:15.797016 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:27:15.797027 | orchestrator |
2026-02-14 03:27:15.797038 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-14 03:27:15.797049 | orchestrator | Saturday 14 February 2026 03:27:07 +0000 (0:00:00.623) 0:00:29.489 *****
2026-02-14 03:27:15.797059 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:27:15.797070 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:27:15.797081 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:27:15.797091 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:27:15.797102 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:27:15.797113 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:27:15.797123 | orchestrator |
2026-02-14 03:27:15.797134 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-14 03:27:15.797145 | orchestrator | Saturday 14 February 2026 03:27:08 +0000 (0:00:00.821) 0:00:30.310 *****
2026-02-14 03:27:15.797156 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-14 03:27:15.797167 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-14 03:27:15.797178 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-14 03:27:15.797189 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-14 03:27:15.797199 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-14 03:27:15.797210 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-14 03:27:15.797221 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-14 03:27:15.797231 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-14 03:27:15.797242 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-02-14 03:27:15.797252 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-14 03:27:15.797263 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-14 03:27:15.797274 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-14 03:27:15.797284 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-02-14 03:27:15.797295 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-14 03:27:15.797306 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-14 03:27:15.797316 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-02-14 03:27:15.797327 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-02-14 03:27:15.797352 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-14 03:27:15.797388 | orchestrator |
2026-02-14 03:27:15.797407 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-14 03:27:15.797425 | orchestrator | Saturday 14 February 2026 03:27:10 +0000 (0:00:01.584) 0:00:31.895 *****
2026-02-14 03:27:15.797444 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-14 03:27:15.797461 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-14 03:27:15.797478 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-14 03:27:15.797494 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:27:15.797511 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-14 03:27:15.797527 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-14 03:27:15.797545 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-14 03:27:15.797584 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:27:15.797604 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-14 03:27:15.797623 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-14 03:27:15.797642 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-14 03:27:15.797660 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:27:15.797671 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-14 03:27:15.797681 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-14 03:27:15.797703 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-14 03:27:15.797714 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:27:15.797724 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-14 03:27:15.797735 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-14 03:27:15.797745 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-14 03:27:15.797756 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:27:15.797766 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-14 03:27:15.797777 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-14 03:27:15.797787 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-14 03:27:15.797798 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:27:15.797808 | orchestrator |
2026-02-14 03:27:15.797819 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-14 03:27:15.797830 | orchestrator | Saturday 14 February 2026 03:27:11 +0000 (0:00:00.934) 0:00:32.830 *****
2026-02-14 03:27:15.797841 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:27:15.797851 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:27:15.797862 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:27:15.797873 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-14 03:27:15.797884 | orchestrator |
2026-02-14 03:27:15.797895 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-14 03:27:15.797907 | orchestrator | Saturday 14 February 2026 03:27:12 +0000 (0:00:01.022) 0:00:33.853 *****
2026-02-14 03:27:15.797918 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:27:15.797929 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:27:15.797939 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:27:15.797950 | orchestrator |
2026-02-14 03:27:15.797960 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-14 03:27:15.797971 | orchestrator | Saturday 14 February 2026 03:27:12 +0000 (0:00:00.367) 0:00:34.220 *****
2026-02-14 03:27:15.797982 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:27:15.797993 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:27:15.798003 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:27:15.798014 | orchestrator |
2026-02-14 03:27:15.798093 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-14 03:27:15.798105 | orchestrator | Saturday 14 February 2026 03:27:13 +0000 (0:00:00.374) 0:00:34.595 *****
2026-02-14 03:27:15.798116 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:27:15.798127 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:27:15.798137 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:27:15.798148 | orchestrator |
2026-02-14 03:27:15.798159 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-14 03:27:15.798170 | orchestrator | Saturday 14 February 2026 03:27:13 +0000 (0:00:00.333) 0:00:34.928 *****
2026-02-14 03:27:15.798181 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:27:15.798191 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:27:15.798202 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:27:15.798213 | orchestrator |
2026-02-14 03:27:15.798224 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-14 03:27:15.798235 | orchestrator | Saturday 14 February 2026 03:27:14 +0000 (0:00:00.788) 0:00:35.716 *****
2026-02-14 03:27:15.798245 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-14 03:27:15.798256 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-14 03:27:15.798267 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-14 03:27:15.798278 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:27:15.798289 | orchestrator |
2026-02-14 03:27:15.798300 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-14 03:27:15.798318 | orchestrator | Saturday 14 February 2026 03:27:14 +0000 (0:00:00.385) 0:00:36.102 *****
2026-02-14 03:27:15.798329 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-14 03:27:15.798340 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-14 03:27:15.798350 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-14 03:27:15.798384 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:27:15.798395 | orchestrator |
2026-02-14 03:27:15.798406 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-14 03:27:15.798417 | orchestrator | Saturday 14 February 2026 03:27:14 +0000 (0:00:00.387) 0:00:36.489 *****
2026-02-14 03:27:15.798436 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-14 03:27:15.798447 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-14 03:27:15.798458 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-14 03:27:15.798469 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:27:15.798480 | orchestrator |
2026-02-14 03:27:15.798491 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-14 03:27:15.798502 | orchestrator | Saturday 14 February 2026 03:27:15 +0000 (0:00:00.427) 0:00:36.917 *****
2026-02-14 03:27:15.798513 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:27:15.798524 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:27:15.798535 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:27:15.798546 | orchestrator |
2026-02-14 03:27:15.798557 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-14 03:27:15.798577 | orchestrator | Saturday 14 February 2026 03:27:15 +0000 (0:00:00.404) 0:00:37.321 *****
2026-02-14 03:27:35.224691 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-14 03:27:35.224799 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-14 03:27:35.224815 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-14 03:27:35.224828 | orchestrator |
2026-02-14 03:27:35.224840 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-14 03:27:35.224852 | orchestrator | Saturday 14 February 2026 03:27:16 +0000 (0:00:01.083) 0:00:38.405 *****
2026-02-14 03:27:35.224863 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-14 03:27:35.224875 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-14 03:27:35.224886 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-14 03:27:35.224897 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-14 03:27:35.224908 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-14 03:27:35.224919 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-14 03:27:35.224930 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-14 03:27:35.224941 | orchestrator |
2026-02-14 03:27:35.224952 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-14 03:27:35.224963 | orchestrator | Saturday 14 February 2026 03:27:17 +0000 (0:00:00.790) 0:00:39.195 *****
2026-02-14 03:27:35.224974 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-14 03:27:35.224984 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-14 03:27:35.224995 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-14 03:27:35.225006 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-14 03:27:35.225017 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-14 03:27:35.225028 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-14 03:27:35.225039 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-14 03:27:35.225050 | orchestrator |
2026-02-14 03:27:35.225060 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-14 03:27:35.225097 | orchestrator | Saturday 14 February 2026 03:27:19 +0000 (0:00:01.865) 0:00:41.060 *****
2026-02-14 03:27:35.225110 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 03:27:35.225122 | orchestrator |
2026-02-14 03:27:35.225133 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-14 03:27:35.225144 | orchestrator | Saturday 14 February 2026 03:27:20 +0000 (0:00:01.213) 0:00:42.274 *****
2026-02-14 03:27:35.225155 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 03:27:35.225166 | orchestrator |
2026-02-14 03:27:35.225176 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-14 03:27:35.225187 | orchestrator | Saturday 14 February 2026 03:27:21 +0000 (0:00:01.208) 0:00:43.482 *****
2026-02-14 03:27:35.225199 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:27:35.225213 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:27:35.225226 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:27:35.225238 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:27:35.225251 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:27:35.225263 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:27:35.225275 | orchestrator |
2026-02-14 03:27:35.225288 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-14 03:27:35.225332 | orchestrator | Saturday 14 February 2026 03:27:23 +0000 (0:00:01.217) 0:00:44.700 *****
2026-02-14 03:27:35.225345 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:27:35.225356 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:27:35.225369 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:27:35.225382 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:27:35.225394 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:27:35.225406 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:27:35.225418 | orchestrator |
2026-02-14 03:27:35.225431 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-14 03:27:35.225443 | orchestrator | Saturday 14 February 2026 03:27:23 +0000 (0:00:00.723) 0:00:45.423 *****
2026-02-14 03:27:35.225453 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:27:35.225464 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:27:35.225475 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:27:35.225486 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:27:35.225496 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:27:35.225507 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:27:35.225518 | orchestrator |
2026-02-14 03:27:35.225544 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-14 03:27:35.225556 | orchestrator | Saturday 14 February 2026 03:27:24 +0000 (0:00:00.890) 0:00:46.314 *****
2026-02-14 03:27:35.225567 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:27:35.225578 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:27:35.225589 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:27:35.225599 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:27:35.225610 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:27:35.225621 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:27:35.225631 | orchestrator |
2026-02-14 03:27:35.225642 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-14 03:27:35.225653 | orchestrator | Saturday 14 February 2026 03:27:25 +0000 (0:00:00.698) 0:00:47.013 *****
2026-02-14 03:27:35.225664 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:27:35.225675 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:27:35.225703 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:27:35.225714 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:27:35.225725 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:27:35.225736 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:27:35.225746 | orchestrator |
2026-02-14 03:27:35.225757 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-14 03:27:35.225776 | orchestrator | Saturday 14 February 2026 03:27:26 +0000 (0:00:01.283) 0:00:48.296 *****
2026-02-14 03:27:35.225787 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:27:35.225798 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:27:35.225809 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:27:35.225819 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:27:35.225830 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:27:35.225841 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:27:35.225852 | orchestrator |
2026-02-14 03:27:35.225863 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-14 03:27:35.225873 | orchestrator | Saturday 14 February 2026 03:27:27 +0000 (0:00:00.707) 0:00:49.004 *****
2026-02-14 03:27:35.225884 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:27:35.225895 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:27:35.225906 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:27:35.225916 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:27:35.225927 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:27:35.225938 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:27:35.225948 | orchestrator |
2026-02-14 03:27:35.225959 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-14 03:27:35.225970 | orchestrator | Saturday 14 February 2026 03:27:28 +0000 (0:00:00.813) 0:00:49.817 *****
2026-02-14 03:27:35.225981 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:27:35.225992 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:27:35.226002 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:27:35.226013 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:27:35.226087 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:27:35.226098 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:27:35.226109 | orchestrator |
2026-02-14 03:27:35.226120 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-14 03:27:35.226164 | orchestrator | Saturday 14 February 2026 03:27:29 +0000 (0:00:01.033) 0:00:50.851 *****
2026-02-14 03:27:35.226177 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:27:35.226188 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:27:35.226198 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:27:35.226209 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:27:35.226220 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:27:35.226230 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:27:35.226241 | orchestrator |
2026-02-14 03:27:35.226252 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-14 03:27:35.226263 | orchestrator | Saturday 14 February 2026 03:27:30 +0000 (0:00:01.266) 0:00:52.118 *****
2026-02-14 03:27:35.226274 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:27:35.226285 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:27:35.226317 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:27:35.226329 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:27:35.226340 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:27:35.226351 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:27:35.226467 | orchestrator |
2026-02-14 03:27:35.226481 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-14 03:27:35.226492 | orchestrator | Saturday 14 February 2026 03:27:31 +0000 (0:00:00.629) 0:00:52.747 *****
2026-02-14 03:27:35.226503 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:27:35.226514 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:27:35.226525 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:27:35.226535 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:27:35.226546 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:27:35.226557 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:27:35.226595 | orchestrator |
2026-02-14 03:27:35.226634 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-14 03:27:35.226646 | orchestrator | Saturday 14 February 2026 03:27:32 +0000 (0:00:00.835) 0:00:53.583 *****
2026-02-14 03:27:35.226657 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:27:35.226692 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:27:35.226715 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:27:35.226726 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:27:35.226737 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:27:35.226748 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:27:35.226759 | orchestrator |
2026-02-14 03:27:35.226770 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-14 03:27:35.226781 | orchestrator | Saturday 14 February 2026 03:27:32 +0000 (0:00:00.589) 0:00:54.173 *****
2026-02-14 03:27:35.226792 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:27:35.226802 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:27:35.226813 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:27:35.226824 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:27:35.226880 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:27:35.226893 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:27:35.226904 | orchestrator |
2026-02-14 03:27:35.226915 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-14 03:27:35.226926 | orchestrator | Saturday 14 February 2026 03:27:33 +0000 (0:00:00.837) 0:00:55.010 *****
2026-02-14 03:27:35.226937 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:27:35.226948 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:27:35.226958 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:27:35.226969 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:27:35.226980 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:27:35.226998 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:27:35.227010 | orchestrator |
2026-02-14 03:27:35.227020 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-14 03:27:35.227031 | orchestrator | Saturday 14 February 2026 03:27:34 +0000 (0:00:00.627) 0:00:55.638 *****
2026-02-14 03:27:35.227042 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:27:35.227053 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:27:35.227063 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:27:35.227074 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:27:35.227097 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:27:35.227108 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:27:35.227119 | orchestrator |
2026-02-14 03:27:35.227130 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-14 03:27:35.227141 | orchestrator | Saturday 14 February 2026 03:27:34 +0000 (0:00:00.838) 0:00:56.476 *****
2026-02-14 03:27:35.227151 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:27:35.227173 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:28:47.992013 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:28:47.992171 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:28:47.992188 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:28:47.992200 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:28:47.992212 | orchestrator |
2026-02-14 03:28:47.992225 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-14 03:28:47.992247 | orchestrator | Saturday 14 February 2026 03:27:35 +0000 (0:00:00.576) 0:00:57.053 *****
2026-02-14 03:28:47.992264 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:28:47.992295 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:28:47.992314 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:28:47.992332 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:28:47.992351 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:28:47.992369 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:28:47.992387 | orchestrator |
2026-02-14 03:28:47.992406 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-14 03:28:47.992426 | orchestrator | Saturday 14 February 2026 03:27:36 +0000 (0:00:00.829) 0:00:57.882 *****
2026-02-14 03:28:47.992444 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:28:47.992462 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:28:47.992473 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:28:47.992484 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:28:47.992495 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:28:47.992506 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:28:47.992540 | orchestrator |
2026-02-14 03:28:47.992552 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-14 03:28:47.992563 | orchestrator | Saturday 14 February 2026 03:27:36 +0000 (0:00:00.615) 0:00:58.498 *****
2026-02-14 03:28:47.992574 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:28:47.992584 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:28:47.992595 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:28:47.992606 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:28:47.992616 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:28:47.992627 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:28:47.992638 | orchestrator |
2026-02-14 03:28:47.992649 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-14 03:28:47.992660 | orchestrator | Saturday 14 February 2026 03:27:38 +0000 (0:00:01.282) 0:00:59.780 *****
2026-02-14 03:28:47.992671 | orchestrator | changed: [testbed-node-5]
2026-02-14 03:28:47.992682 | orchestrator | changed: [testbed-node-3]
2026-02-14 03:28:47.992692 | orchestrator | changed: [testbed-node-4]
2026-02-14 03:28:47.992703 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:28:47.992713 | orchestrator | changed: [testbed-node-1]
2026-02-14 03:28:47.992724 | orchestrator | changed: [testbed-node-2]
2026-02-14 03:28:47.992734 | orchestrator |
2026-02-14 03:28:47.992745 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-14 03:28:47.992756 | orchestrator | Saturday 14 February 2026 03:27:39 +0000 (0:00:01.744) 0:01:01.525 *****
2026-02-14 03:28:47.992767 | orchestrator | changed: [testbed-node-5]
2026-02-14 03:28:47.992777 | orchestrator | changed: [testbed-node-3]
2026-02-14 03:28:47.992788 | orchestrator | changed: [testbed-node-4]
2026-02-14 03:28:47.992798 | orchestrator | changed: [testbed-node-2]
2026-02-14 03:28:47.992809 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:28:47.992820 | orchestrator | changed: [testbed-node-1]
2026-02-14 03:28:47.992830 | orchestrator |
2026-02-14 03:28:47.992841 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-14 03:28:47.992852 | orchestrator | Saturday 14 February 2026 03:27:42 +0000 (0:00:02.103) 0:01:03.629 *****
2026-02-14 03:28:47.992864 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 03:28:47.992876 | orchestrator |
2026-02-14 03:28:47.992887 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-14 03:28:47.992897 | orchestrator | Saturday 14 February 2026 03:27:43 +0000 (0:00:01.442) 0:01:05.071 *****
2026-02-14 03:28:47.992908 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:28:47.992918 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:28:47.992929 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:28:47.992940 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:28:47.992950 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:28:47.992961 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:28:47.992971 | orchestrator |
2026-02-14 03:28:47.992982 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-14 03:28:47.992993 | orchestrator | Saturday 14 February 2026 03:27:44 +0000 (0:00:00.803) 0:01:05.716 *****
2026-02-14 03:28:47.993003 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:28:47.993014 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:28:47.993025 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:28:47.993035 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:28:47.993046 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:28:47.993056 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:28:47.993135 | orchestrator |
2026-02-14 03:28:47.993147 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-14 03:28:47.993158 | orchestrator | Saturday 14 February 2026 03:27:44 +0000 (0:00:00.803) 0:01:06.519 *****
2026-02-14 03:28:47.993168 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-14 03:28:47.993194 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-14 03:28:47.993214 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-14 03:28:47.993225 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-14 03:28:47.993237 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-14 03:28:47.993248 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-14 03:28:47.993259 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-14 03:28:47.993270 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-14 03:28:47.993281 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-14 03:28:47.993312 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-14 03:28:47.993324 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-14 03:28:47.993335 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-14 03:28:47.993346 | orchestrator |
2026-02-14 03:28:47.993357 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-14 03:28:47.993367 | orchestrator | Saturday 14 February 2026 03:27:46 +0000 (0:00:01.421) 0:01:07.941 *****
2026-02-14 03:28:47.993378 | orchestrator | changed: [testbed-node-3]
2026-02-14 03:28:47.993389 | orchestrator | changed: [testbed-node-4]
2026-02-14 03:28:47.993400 | orchestrator | changed: [testbed-node-5]
2026-02-14 03:28:47.993411 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:28:47.993421 | orchestrator | changed: [testbed-node-1]
2026-02-14 03:28:47.993432 | orchestrator | changed: [testbed-node-2]
2026-02-14 03:28:47.993443 | orchestrator |
2026-02-14 03:28:47.993454 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-14 03:28:47.993465 | orchestrator | Saturday 14 February 2026 03:27:47 +0000 (0:00:01.133) 0:01:09.075 *****
2026-02-14 03:28:47.993475 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:28:47.993486 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:28:47.993497 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:28:47.993508 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:28:47.993518 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:28:47.993529 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:28:47.993540 | orchestrator |
2026-02-14 03:28:47.993551 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-14 03:28:47.993562 | orchestrator | Saturday 14 February 2026 03:27:48 +0000 (0:00:00.608) 0:01:09.683 *****
2026-02-14 03:28:47.993572 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:28:47.993630 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:28:47.993644 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:28:47.993655 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:28:47.993666 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:28:47.993677 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:28:47.993688 | orchestrator |
2026-02-14 03:28:47.993699 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-14 03:28:47.993710 | orchestrator | Saturday 14 February 2026 03:27:48 +0000 (0:00:00.794) 0:01:10.478 *****
2026-02-14 03:28:47.993721 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:28:47.993732 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:28:47.993743 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:28:47.993754 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:28:47.993764 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:28:47.993775 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:28:47.993786 | orchestrator |
2026-02-14 03:28:47.993797 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-14 03:28:47.993808 | orchestrator | Saturday 14 February 2026 03:27:49 +0000 (0:00:00.584) 0:01:11.063 *****
2026-02-14 03:28:47.993827 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4,
testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:28:47.993839 | orchestrator | 2026-02-14 03:28:47.993850 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-14 03:28:47.993860 | orchestrator | Saturday 14 February 2026 03:27:50 +0000 (0:00:01.270) 0:01:12.333 ***** 2026-02-14 03:28:47.993871 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:28:47.993882 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:28:47.993893 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:28:47.993904 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:28:47.993914 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:28:47.993925 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:28:47.993936 | orchestrator | 2026-02-14 03:28:47.993947 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-14 03:28:47.993958 | orchestrator | Saturday 14 February 2026 03:28:47 +0000 (0:00:56.507) 0:02:08.841 ***** 2026-02-14 03:28:47.993969 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-14 03:28:47.993980 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-14 03:28:47.993991 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-14 03:28:47.994001 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:28:47.994012 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-14 03:28:47.994114 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-14 03:28:47.994126 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-14 03:28:47.994137 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:28:47.994148 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  
2026-02-14 03:28:47.994158 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-14 03:28:47.994176 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-14 03:28:47.994187 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:28:47.994198 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-14 03:28:47.994209 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-14 03:28:47.994220 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-14 03:28:47.994230 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:28:47.994241 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-14 03:28:47.994252 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-14 03:28:47.994263 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-14 03:28:47.994282 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:29:11.160328 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-14 03:29:11.160436 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-14 03:29:11.160450 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-14 03:29:11.160461 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:29:11.160472 | orchestrator | 2026-02-14 03:29:11.160483 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-14 03:29:11.160493 | orchestrator | Saturday 14 February 2026 03:28:47 +0000 (0:00:00.677) 0:02:09.518 ***** 2026-02-14 03:29:11.160503 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:29:11.160512 | orchestrator | skipping: [testbed-node-4] 2026-02-14 
03:29:11.160522 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:29:11.160532 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:29:11.160541 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:29:11.160575 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:29:11.160585 | orchestrator | 2026-02-14 03:29:11.160594 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-14 03:29:11.160604 | orchestrator | Saturday 14 February 2026 03:28:48 +0000 (0:00:00.811) 0:02:10.329 ***** 2026-02-14 03:29:11.160613 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:29:11.160623 | orchestrator | 2026-02-14 03:29:11.160632 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-14 03:29:11.160642 | orchestrator | Saturday 14 February 2026 03:28:48 +0000 (0:00:00.156) 0:02:10.486 ***** 2026-02-14 03:29:11.160651 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:29:11.160661 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:29:11.160670 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:29:11.160680 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:29:11.160689 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:29:11.160698 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:29:11.160708 | orchestrator | 2026-02-14 03:29:11.160717 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-14 03:29:11.160727 | orchestrator | Saturday 14 February 2026 03:28:49 +0000 (0:00:00.626) 0:02:11.112 ***** 2026-02-14 03:29:11.160736 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:29:11.160746 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:29:11.160755 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:29:11.160764 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:29:11.160774 | orchestrator | skipping: [testbed-node-1] 2026-02-14 
03:29:11.160783 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:29:11.160792 | orchestrator | 2026-02-14 03:29:11.160802 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-14 03:29:11.160811 | orchestrator | Saturday 14 February 2026 03:28:50 +0000 (0:00:00.843) 0:02:11.956 ***** 2026-02-14 03:29:11.160821 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:29:11.160830 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:29:11.160840 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:29:11.160850 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:29:11.160859 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:29:11.160869 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:29:11.160880 | orchestrator | 2026-02-14 03:29:11.160891 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-14 03:29:11.160908 | orchestrator | Saturday 14 February 2026 03:28:51 +0000 (0:00:00.631) 0:02:12.587 ***** 2026-02-14 03:29:11.160925 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:29:11.160944 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:29:11.160959 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:29:11.160972 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:29:11.160986 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:29:11.161034 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:29:11.161054 | orchestrator | 2026-02-14 03:29:11.161071 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-14 03:29:11.161087 | orchestrator | Saturday 14 February 2026 03:28:54 +0000 (0:00:03.518) 0:02:16.106 ***** 2026-02-14 03:29:11.161102 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:29:11.161117 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:29:11.161133 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:29:11.161149 | orchestrator | ok: [testbed-node-0] 
2026-02-14 03:29:11.161164 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:29:11.161179 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:29:11.161193 | orchestrator | 2026-02-14 03:29:11.161208 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-14 03:29:11.161225 | orchestrator | Saturday 14 February 2026 03:28:55 +0000 (0:00:00.623) 0:02:16.730 ***** 2026-02-14 03:29:11.161243 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:29:11.161261 | orchestrator | 2026-02-14 03:29:11.161363 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-14 03:29:11.161390 | orchestrator | Saturday 14 February 2026 03:28:56 +0000 (0:00:01.286) 0:02:18.016 ***** 2026-02-14 03:29:11.161400 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:29:11.161410 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:29:11.161420 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:29:11.161429 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:29:11.161453 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:29:11.161463 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:29:11.161473 | orchestrator | 2026-02-14 03:29:11.161483 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-14 03:29:11.161492 | orchestrator | Saturday 14 February 2026 03:28:57 +0000 (0:00:00.804) 0:02:18.821 ***** 2026-02-14 03:29:11.161502 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:29:11.161512 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:29:11.161521 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:29:11.161530 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:29:11.161540 | orchestrator | skipping: [testbed-node-1] 2026-02-14 
03:29:11.161549 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:29:11.161559 | orchestrator | 2026-02-14 03:29:11.161569 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-14 03:29:11.161578 | orchestrator | Saturday 14 February 2026 03:28:57 +0000 (0:00:00.605) 0:02:19.426 ***** 2026-02-14 03:29:11.161588 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:29:11.161618 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:29:11.161629 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:29:11.161639 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:29:11.161648 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:29:11.161658 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:29:11.161668 | orchestrator | 2026-02-14 03:29:11.161677 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-14 03:29:11.161687 | orchestrator | Saturday 14 February 2026 03:28:58 +0000 (0:00:00.859) 0:02:20.286 ***** 2026-02-14 03:29:11.161697 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:29:11.161711 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:29:11.161727 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:29:11.161743 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:29:11.161759 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:29:11.161775 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:29:11.161790 | orchestrator | 2026-02-14 03:29:11.161805 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-14 03:29:11.161821 | orchestrator | Saturday 14 February 2026 03:28:59 +0000 (0:00:00.607) 0:02:20.893 ***** 2026-02-14 03:29:11.161836 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:29:11.161853 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:29:11.161869 | orchestrator | skipping: [testbed-node-5] 2026-02-14 
03:29:11.161886 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:29:11.161902 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:29:11.161919 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:29:11.161932 | orchestrator | 2026-02-14 03:29:11.161949 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-14 03:29:11.161965 | orchestrator | Saturday 14 February 2026 03:29:00 +0000 (0:00:00.848) 0:02:21.741 ***** 2026-02-14 03:29:11.161981 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:29:11.162114 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:29:11.162141 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:29:11.162159 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:29:11.162176 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:29:11.162192 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:29:11.162209 | orchestrator | 2026-02-14 03:29:11.162219 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-14 03:29:11.162229 | orchestrator | Saturday 14 February 2026 03:29:00 +0000 (0:00:00.600) 0:02:22.342 ***** 2026-02-14 03:29:11.162249 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:29:11.162259 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:29:11.162276 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:29:11.162292 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:29:11.162308 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:29:11.162323 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:29:11.162337 | orchestrator | 2026-02-14 03:29:11.162352 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-14 03:29:11.162367 | orchestrator | Saturday 14 February 2026 03:29:01 +0000 (0:00:00.873) 0:02:23.216 ***** 2026-02-14 03:29:11.162382 | orchestrator | skipping: [testbed-node-3] 2026-02-14 
03:29:11.162398 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:29:11.162413 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:29:11.162429 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:29:11.162444 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:29:11.162459 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:29:11.162475 | orchestrator | 2026-02-14 03:29:11.162489 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-14 03:29:11.162505 | orchestrator | Saturday 14 February 2026 03:29:02 +0000 (0:00:00.649) 0:02:23.865 ***** 2026-02-14 03:29:11.162520 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:29:11.162536 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:29:11.162552 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:29:11.162569 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:29:11.162585 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:29:11.162602 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:29:11.162618 | orchestrator | 2026-02-14 03:29:11.162634 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-14 03:29:11.162649 | orchestrator | Saturday 14 February 2026 03:29:03 +0000 (0:00:01.280) 0:02:25.146 ***** 2026-02-14 03:29:11.162661 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:29:11.162672 | orchestrator | 2026-02-14 03:29:11.162682 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-14 03:29:11.162691 | orchestrator | Saturday 14 February 2026 03:29:04 +0000 (0:00:01.257) 0:02:26.403 ***** 2026-02-14 03:29:11.162701 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-02-14 03:29:11.162711 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-02-14 
03:29:11.162720 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-02-14 03:29:11.162730 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-02-14 03:29:11.162739 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-02-14 03:29:11.162749 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-02-14 03:29:11.162758 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-02-14 03:29:11.162777 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-02-14 03:29:11.162787 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-02-14 03:29:11.162796 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-02-14 03:29:11.162806 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-02-14 03:29:11.162815 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-02-14 03:29:11.162825 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-02-14 03:29:11.162834 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-02-14 03:29:11.162844 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-02-14 03:29:11.162853 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-02-14 03:29:11.162863 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-02-14 03:29:11.162888 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-02-14 03:29:16.526769 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-02-14 03:29:16.526909 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-02-14 03:29:16.526924 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-02-14 03:29:16.526934 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-02-14 03:29:16.526944 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 
2026-02-14 03:29:16.527033 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-02-14 03:29:16.527046 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-02-14 03:29:16.527057 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-02-14 03:29:16.527067 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-02-14 03:29:16.527078 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-02-14 03:29:16.527088 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-02-14 03:29:16.527098 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-02-14 03:29:16.527108 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-02-14 03:29:16.527118 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-02-14 03:29:16.527127 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-02-14 03:29:16.527139 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-02-14 03:29:16.527148 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-02-14 03:29:16.527158 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-02-14 03:29:16.527168 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-02-14 03:29:16.527178 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-02-14 03:29:16.527187 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-02-14 03:29:16.527197 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-02-14 03:29:16.527207 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-02-14 03:29:16.527216 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-02-14 03:29:16.527226 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-14 
03:29:16.527236 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-02-14 03:29:16.527246 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-02-14 03:29:16.527256 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-02-14 03:29:16.527266 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-14 03:29:16.527275 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-02-14 03:29:16.527285 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-14 03:29:16.527295 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-14 03:29:16.527305 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-02-14 03:29:16.527317 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-02-14 03:29:16.527328 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-14 03:29:16.527340 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-14 03:29:16.527351 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-14 03:29:16.527363 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-14 03:29:16.527375 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-14 03:29:16.527387 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-14 03:29:16.527398 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-14 03:29:16.527410 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-14 03:29:16.527421 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-14 03:29:16.527440 | orchestrator | changed: [testbed-node-5] => 
(item=/var/lib/ceph/bootstrap-mds) 2026-02-14 03:29:16.527452 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-14 03:29:16.527463 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-14 03:29:16.527475 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-14 03:29:16.527486 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-14 03:29:16.527497 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-14 03:29:16.527524 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-14 03:29:16.527536 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-14 03:29:16.527547 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-14 03:29:16.527564 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-14 03:29:16.527580 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-14 03:29:16.527596 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-14 03:29:16.527610 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-14 03:29:16.527627 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-14 03:29:16.527643 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-14 03:29:16.527681 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-14 03:29:16.527701 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-02-14 03:29:16.527712 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-14 03:29:16.527722 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-14 
03:29:16.527732 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-14 03:29:16.527742 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-14 03:29:16.527751 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-02-14 03:29:16.527761 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-02-14 03:29:16.527771 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-02-14 03:29:16.527781 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-14 03:29:16.527791 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-14 03:29:16.527800 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-14 03:29:16.527810 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-02-14 03:29:16.527820 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-02-14 03:29:16.527830 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-02-14 03:29:16.527839 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-02-14 03:29:16.527849 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-02-14 03:29:16.527859 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-02-14 03:29:16.527869 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-02-14 03:29:16.527878 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-02-14 03:29:16.527888 | orchestrator | 2026-02-14 03:29:16.527899 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-14 03:29:16.527909 | orchestrator | Saturday 14 February 2026 03:29:11 +0000 (0:00:06.267) 0:02:32.670 ***** 2026-02-14 03:29:16.527919 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:29:16.527929 | orchestrator | skipping: 
[testbed-node-1]
2026-02-14 03:29:16.527938 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:29:16.527949 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-14 03:29:16.527969 | orchestrator |
2026-02-14 03:29:16.528000 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-14 03:29:16.528011 | orchestrator | Saturday 14 February 2026 03:29:12 +0000 (0:00:01.037) 0:02:33.707 *****
2026-02-14 03:29:16.528022 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-14 03:29:16.528032 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-14 03:29:16.528042 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-14 03:29:16.528051 | orchestrator |
2026-02-14 03:29:16.528061 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-14 03:29:16.528071 | orchestrator | Saturday 14 February 2026 03:29:12 +0000 (0:00:00.720) 0:02:34.428 *****
2026-02-14 03:29:16.528081 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-14 03:29:16.528091 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-14 03:29:16.528100 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-14 03:29:16.528110 | orchestrator |
2026-02-14 03:29:16.528120 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-14 03:29:16.528130 | orchestrator | Saturday 14 February 2026 03:29:14 +0000 (0:00:01.225) 0:02:35.654 *****
2026-02-14 03:29:16.528140 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:29:16.528149 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:29:16.528159 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:29:16.528168 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:29:16.528178 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:29:16.528188 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:29:16.528197 | orchestrator |
2026-02-14 03:29:16.528207 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-14 03:29:16.528223 | orchestrator | Saturday 14 February 2026 03:29:14 +0000 (0:00:00.885) 0:02:36.540 *****
2026-02-14 03:29:16.528233 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:29:16.528243 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:29:16.528253 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:29:16.528262 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:29:16.528272 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:29:16.528282 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:29:16.528291 | orchestrator |
2026-02-14 03:29:16.528301 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-14 03:29:16.528311 | orchestrator | Saturday 14 February 2026 03:29:15 +0000 (0:00:00.607) 0:02:37.147 *****
2026-02-14 03:29:16.528321 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:29:16.528330 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:29:16.528340 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:29:16.528350 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:29:16.528360 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:29:16.528369 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:29:16.528379 | orchestrator |
2026-02-14 03:29:16.528395 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-14 03:29:29.778342 | orchestrator | Saturday 14 February 2026 03:29:16 +0000 (0:00:00.904) 0:02:38.052 *****
2026-02-14 03:29:29.778458 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:29:29.778475 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:29:29.778487 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:29:29.778498 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:29:29.778509 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:29:29.778520 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:29:29.778553 | orchestrator |
2026-02-14 03:29:29.778566 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-14 03:29:29.778578 | orchestrator | Saturday 14 February 2026 03:29:17 +0000 (0:00:00.647) 0:02:38.700 *****
2026-02-14 03:29:29.778588 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:29:29.778599 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:29:29.778610 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:29:29.778621 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:29:29.778631 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:29:29.778642 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:29:29.778653 | orchestrator |
2026-02-14 03:29:29.778664 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-14 03:29:29.778677 | orchestrator | Saturday 14 February 2026 03:29:17 +0000 (0:00:00.814) 0:02:39.514 *****
2026-02-14 03:29:29.778688 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:29:29.778698 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:29:29.778709 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:29:29.778720 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:29:29.778730 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:29:29.778741 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:29:29.778752 | orchestrator |
2026-02-14 03:29:29.778763 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-14 03:29:29.778774 | orchestrator | Saturday 14 February 2026 03:29:18 +0000 (0:00:00.644) 0:02:40.159 *****
2026-02-14 03:29:29.778785 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:29:29.778795 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:29:29.778806 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:29:29.778817 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:29:29.778827 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:29:29.778838 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:29:29.778849 | orchestrator |
2026-02-14 03:29:29.778860 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-14 03:29:29.778871 | orchestrator | Saturday 14 February 2026 03:29:19 +0000 (0:00:00.831) 0:02:40.990 *****
2026-02-14 03:29:29.778882 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:29:29.778892 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:29:29.778903 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:29:29.778914 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:29:29.778924 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:29:29.778970 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:29:29.778983 | orchestrator |
2026-02-14 03:29:29.778995 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-14 03:29:29.779006 | orchestrator | Saturday 14 February 2026 03:29:20 +0000 (0:00:00.644) 0:02:41.635 *****
2026-02-14 03:29:29.779016 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:29:29.779028 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:29:29.779039 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:29:29.779050 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:29:29.779061 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:29:29.779072 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:29:29.779083 | orchestrator |
2026-02-14 03:29:29.779094 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-14 03:29:29.779105 | orchestrator | Saturday 14 February 2026 03:29:22 +0000 (0:00:02.896) 0:02:44.531 *****
2026-02-14 03:29:29.779116 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:29:29.779126 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:29:29.779137 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:29:29.779148 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:29:29.779159 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:29:29.779169 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:29:29.779180 | orchestrator |
2026-02-14 03:29:29.779191 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-14 03:29:29.779210 | orchestrator | Saturday 14 February 2026 03:29:23 +0000 (0:00:00.582) 0:02:45.114 *****
2026-02-14 03:29:29.779221 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:29:29.779231 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:29:29.779242 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:29:29.779253 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:29:29.779264 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:29:29.779274 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:29:29.779285 | orchestrator |
2026-02-14 03:29:29.779296 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-14 03:29:29.779307 | orchestrator | Saturday 14 February 2026 03:29:24 +0000 (0:00:00.875) 0:02:45.989 *****
2026-02-14 03:29:29.779317 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:29:29.779328 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:29:29.779355 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:29:29.779366 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:29:29.779377 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:29:29.779388 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:29:29.779399 | orchestrator |
2026-02-14 03:29:29.779411 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-14 03:29:29.779422 | orchestrator | Saturday 14 February 2026 03:29:25 +0000 (0:00:00.627) 0:02:46.616 *****
2026-02-14 03:29:29.779433 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-14 03:29:29.779446 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-14 03:29:29.779457 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-14 03:29:29.779468 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:29:29.779497 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:29:29.779508 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:29:29.779519 | orchestrator |
2026-02-14 03:29:29.779530 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-14 03:29:29.779541 | orchestrator | Saturday 14 February 2026 03:29:25 +0000 (0:00:00.858) 0:02:47.475 *****
2026-02-14 03:29:29.779554 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-02-14 03:29:29.779568 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-02-14 03:29:29.779580 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-02-14 03:29:29.779592 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-02-14 03:29:29.779603 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:29:29.779614 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-02-14 03:29:29.779631 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-02-14 03:29:29.779643 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:29:29.779653 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:29:29.779664 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:29:29.779675 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:29:29.779686 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:29:29.779697 | orchestrator |
2026-02-14 03:29:29.779708 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-14 03:29:29.779718 | orchestrator | Saturday 14 February 2026 03:29:26 +0000 (0:00:00.665) 0:02:48.141 *****
2026-02-14 03:29:29.779729 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:29:29.779740 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:29:29.779751 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:29:29.779762 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:29:29.779772 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:29:29.779783 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:29:29.779794 | orchestrator |
2026-02-14 03:29:29.779804 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-14 03:29:29.779815 | orchestrator | Saturday 14 February 2026 03:29:27 +0000 (0:00:00.887) 0:02:49.029 *****
2026-02-14 03:29:29.779826 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:29:29.779837 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:29:29.779847 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:29:29.779858 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:29:29.779868 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:29:29.779879 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:29:29.779890 | orchestrator |
2026-02-14 03:29:29.779901 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-14 03:29:29.779912 | orchestrator | Saturday 14 February 2026 03:29:28 +0000 (0:00:00.591) 0:02:49.621 *****
2026-02-14 03:29:29.779928 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:29:29.779955 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:29:29.779966 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:29:29.779976 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:29:29.779995 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:29:29.780013 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:29:29.780031 | orchestrator |
2026-02-14 03:29:29.780058 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-14 03:29:29.780077 | orchestrator | Saturday 14 February 2026 03:29:28 +0000 (0:00:00.875) 0:02:50.496 *****
2026-02-14 03:29:29.780094 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:29:29.780111 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:29:29.780130 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:29:29.780149 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:29:29.780166 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:29:29.780184 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:29:29.780201 | orchestrator |
2026-02-14 03:29:29.780219 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-14 03:29:29.780248 | orchestrator | Saturday 14 February 2026 03:29:29 +0000 (0:00:00.804) 0:02:51.301 *****
2026-02-14 03:29:47.062472 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:29:47.062608 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:29:47.062632 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:29:47.062652 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:29:47.062671 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:29:47.062690 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:29:47.062743 | orchestrator |
2026-02-14 03:29:47.062766 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-14 03:29:47.062788 | orchestrator | Saturday 14 February 2026 03:29:30 +0000 (0:00:00.625) 0:02:51.926 *****
2026-02-14 03:29:47.062808 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:29:47.062829 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:29:47.062848 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:29:47.062867 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:29:47.062921 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:29:47.062942 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:29:47.062962 | orchestrator |
2026-02-14 03:29:47.062983 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-14 03:29:47.063003 | orchestrator | Saturday 14 February 2026 03:29:31 +0000 (0:00:00.816) 0:02:52.743 *****
2026-02-14 03:29:47.063021 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-14 03:29:47.063041 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-14 03:29:47.063061 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-14 03:29:47.063083 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:29:47.063103 | orchestrator |
2026-02-14 03:29:47.063123 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-14 03:29:47.063144 | orchestrator | Saturday 14 February 2026 03:29:31 +0000 (0:00:00.435) 0:02:53.179 *****
2026-02-14 03:29:47.063164 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-14 03:29:47.063184 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-14 03:29:47.063204 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-14 03:29:47.063225 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:29:47.063243 | orchestrator |
2026-02-14 03:29:47.063261 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-14 03:29:47.063279 | orchestrator | Saturday 14 February 2026 03:29:32 +0000 (0:00:00.439) 0:02:53.618 *****
2026-02-14 03:29:47.063297 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-14 03:29:47.063316 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-14 03:29:47.063332 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-14 03:29:47.063351 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:29:47.063368 | orchestrator |
2026-02-14 03:29:47.063386 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-14 03:29:47.063405 | orchestrator | Saturday 14 February 2026 03:29:32 +0000 (0:00:00.418) 0:02:54.037 *****
2026-02-14 03:29:47.063423 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:29:47.063441 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:29:47.063459 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:29:47.063476 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:29:47.063496 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:29:47.063516 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:29:47.063535 | orchestrator |
2026-02-14 03:29:47.063555 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-14 03:29:47.063574 | orchestrator | Saturday 14 February 2026 03:29:33 +0000 (0:00:00.637) 0:02:54.674 *****
2026-02-14 03:29:47.063593 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-14 03:29:47.063612 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-14 03:29:47.063631 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-14 03:29:47.063650 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-02-14 03:29:47.063661 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:29:47.063672 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-02-14 03:29:47.063683 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:29:47.063694 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-02-14 03:29:47.063705 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:29:47.063715 | orchestrator |
2026-02-14 03:29:47.063726 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-14 03:29:47.063754 | orchestrator | Saturday 14 February 2026 03:29:34 +0000 (0:00:01.828) 0:02:56.502 *****
2026-02-14 03:29:47.063765 | orchestrator | changed: [testbed-node-3]
2026-02-14 03:29:47.063776 | orchestrator | changed: [testbed-node-4]
2026-02-14 03:29:47.063787 | orchestrator | changed: [testbed-node-5]
2026-02-14 03:29:47.063798 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:29:47.063809 | orchestrator | changed: [testbed-node-1]
2026-02-14 03:29:47.063819 | orchestrator | changed: [testbed-node-2]
2026-02-14 03:29:47.063830 | orchestrator |
2026-02-14 03:29:47.063841 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-14 03:29:47.063852 | orchestrator | Saturday 14 February 2026 03:29:37 +0000 (0:00:02.583) 0:02:59.086 *****
2026-02-14 03:29:47.063863 | orchestrator | changed: [testbed-node-3]
2026-02-14 03:29:47.063933 | orchestrator | changed: [testbed-node-4]
2026-02-14 03:29:47.063948 | orchestrator | changed: [testbed-node-5]
2026-02-14 03:29:47.063959 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:29:47.063970 | orchestrator | changed: [testbed-node-1]
2026-02-14 03:29:47.063981 | orchestrator | changed: [testbed-node-2]
2026-02-14 03:29:47.063992 | orchestrator |
2026-02-14 03:29:47.064003 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-02-14 03:29:47.064014 | orchestrator | Saturday 14 February 2026 03:29:38 +0000 (0:00:01.023) 0:03:00.110 *****
2026-02-14 03:29:47.064025 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:29:47.064036 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:29:47.064046 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:29:47.064058 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 03:29:47.064070 | orchestrator |
2026-02-14 03:29:47.064081 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-02-14 03:29:47.064092 | orchestrator | Saturday 14 February 2026 03:29:39 +0000 (0:00:01.082) 0:03:01.193 *****
2026-02-14 03:29:47.064102 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:29:47.064139 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:29:47.064150 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:29:47.064161 | orchestrator |
2026-02-14 03:29:47.064172 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-02-14 03:29:47.064183 | orchestrator | Saturday 14 February 2026 03:29:39 +0000 (0:00:00.334) 0:03:01.527 *****
2026-02-14 03:29:47.064194 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:29:47.064205 | orchestrator | changed: [testbed-node-1]
2026-02-14 03:29:47.064215 | orchestrator | changed: [testbed-node-2]
2026-02-14 03:29:47.064226 | orchestrator |
2026-02-14 03:29:47.064237 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-02-14 03:29:47.064248 | orchestrator | Saturday 14 February 2026 03:29:41 +0000 (0:00:01.413) 0:03:02.941 *****
2026-02-14 03:29:47.064258 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-14 03:29:47.064269 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-14 03:29:47.064280 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-14 03:29:47.064291 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:29:47.064301 | orchestrator |
2026-02-14 03:29:47.064312 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-02-14 03:29:47.064323 | orchestrator | Saturday 14 February 2026 03:29:42 +0000 (0:00:00.667) 0:03:03.608 *****
2026-02-14 03:29:47.064334 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:29:47.064345 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:29:47.064356 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:29:47.064366 | orchestrator |
2026-02-14 03:29:47.064377 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-02-14 03:29:47.064388 | orchestrator | Saturday 14 February 2026 03:29:42 +0000 (0:00:00.354) 0:03:03.962 *****
2026-02-14 03:29:47.064399 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:29:47.064410 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:29:47.064421 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:29:47.064440 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-14 03:29:47.064451 | orchestrator |
2026-02-14 03:29:47.064462 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-02-14 03:29:47.064473 | orchestrator | Saturday 14 February 2026 03:29:43 +0000 (0:00:01.059) 0:03:05.021 *****
2026-02-14 03:29:47.064484 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-14 03:29:47.064495 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-14 03:29:47.064505 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-14 03:29:47.064516 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:29:47.064527 | orchestrator |
2026-02-14 03:29:47.064538 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-02-14 03:29:47.064548 | orchestrator | Saturday 14 February 2026 03:29:43 +0000 (0:00:00.421) 0:03:05.443 *****
2026-02-14 03:29:47.064559 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:29:47.064570 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:29:47.064581 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:29:47.064592 | orchestrator |
2026-02-14 03:29:47.064602 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-02-14 03:29:47.064613 | orchestrator | Saturday 14 February 2026 03:29:44 +0000 (0:00:00.341) 0:03:05.785 *****
2026-02-14 03:29:47.064624 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:29:47.064635 | orchestrator |
2026-02-14 03:29:47.064645 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-02-14 03:29:47.064656 | orchestrator | Saturday 14 February 2026 03:29:44 +0000 (0:00:00.263) 0:03:06.048 *****
2026-02-14 03:29:47.064667 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:29:47.064678 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:29:47.064689 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:29:47.064699 | orchestrator |
2026-02-14 03:29:47.064710 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-02-14 03:29:47.064721 | orchestrator | Saturday 14 February 2026 03:29:44 +0000 (0:00:00.319) 0:03:06.367 *****
2026-02-14 03:29:47.064732 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:29:47.064742 | orchestrator |
2026-02-14 03:29:47.064753 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-02-14 03:29:47.064764 | orchestrator | Saturday 14 February 2026 03:29:45 +0000 (0:00:00.696) 0:03:07.064 *****
2026-02-14 03:29:47.064775 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:29:47.064786 | orchestrator |
2026-02-14 03:29:47.064796 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-02-14 03:29:47.064807 | orchestrator | Saturday 14 February 2026 03:29:45 +0000 (0:00:00.250) 0:03:07.315 *****
2026-02-14 03:29:47.064818 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:29:47.064828 | orchestrator |
2026-02-14 03:29:47.064839 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-02-14 03:29:47.064850 | orchestrator | Saturday 14 February 2026 03:29:45 +0000 (0:00:00.149) 0:03:07.464 *****
2026-02-14 03:29:47.064867 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:29:47.064877 | orchestrator |
2026-02-14 03:29:47.064911 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-02-14 03:29:47.064923 | orchestrator | Saturday 14 February 2026 03:29:46 +0000 (0:00:00.276) 0:03:07.740 *****
2026-02-14 03:29:47.064933 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:29:47.064944 | orchestrator |
2026-02-14 03:29:47.064955 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-02-14 03:29:47.064966 | orchestrator | Saturday 14 February 2026 03:29:46 +0000 (0:00:00.240) 0:03:07.981 *****
2026-02-14 03:29:47.064977 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-14 03:29:47.064988 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-14 03:29:47.064999 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-14 03:29:47.065017 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:29:47.065028 | orchestrator |
2026-02-14 03:29:47.065039 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-02-14 03:29:47.065050 | orchestrator | Saturday 14 February 2026 03:29:46 +0000 (0:00:00.416) 0:03:08.397 *****
2026-02-14 03:29:47.065068 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:30:05.648365 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:30:05.648476 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:30:05.648491 | orchestrator |
2026-02-14 03:30:05.648503 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-14 03:30:05.648514 | orchestrator | Saturday 14 February 2026 03:29:47 +0000 (0:00:00.331) 0:03:08.729 *****
2026-02-14 03:30:05.648525 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:30:05.648535 | orchestrator |
2026-02-14 03:30:05.648544 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-14 03:30:05.648555 | orchestrator | Saturday 14 February 2026 03:29:47 +0000 (0:00:00.219) 0:03:08.949 *****
2026-02-14 03:30:05.648564 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:30:05.648574 | orchestrator |
2026-02-14 03:30:05.648584 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-02-14 03:30:05.648594 | orchestrator | Saturday 14 February 2026 03:29:47 +0000 (0:00:00.237) 0:03:09.187 *****
2026-02-14 03:30:05.648604 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:30:05.648614 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:30:05.648624 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:30:05.648634 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-14 03:30:05.648644 | orchestrator |
2026-02-14 03:30:05.648654 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-02-14 03:30:05.648664 | orchestrator | Saturday 14 February 2026 03:29:48 +0000 (0:00:01.070) 0:03:10.258 *****
2026-02-14 03:30:05.648674 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:30:05.648686 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:30:05.648696 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:30:05.648706 | orchestrator |
2026-02-14 03:30:05.648716 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-02-14 03:30:05.648725 | orchestrator | Saturday 14 February 2026 03:29:49 +0000 (0:00:00.314) 0:03:10.572 *****
2026-02-14 03:30:05.648735 | orchestrator | changed: [testbed-node-3]
2026-02-14 03:30:05.648745 | orchestrator | changed: [testbed-node-4]
2026-02-14 03:30:05.648755 | orchestrator | changed: [testbed-node-5]
2026-02-14 03:30:05.648765 | orchestrator |
2026-02-14 03:30:05.648775 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-02-14 03:30:05.648785 | orchestrator | Saturday 14 February 2026 03:29:50 +0000 (0:00:01.450) 0:03:12.023 *****
2026-02-14 03:30:05.648795 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-14 03:30:05.648805 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-14 03:30:05.648815 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-14 03:30:05.648888 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:30:05.648906 | orchestrator |
2026-02-14 03:30:05.648924 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-02-14 03:30:05.648940 | orchestrator | Saturday 14 February 2026 03:29:51 +0000 (0:00:00.667) 0:03:12.691 *****
2026-02-14 03:30:05.648957 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:30:05.648974 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:30:05.648989 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:30:05.649005 | orchestrator |
2026-02-14 03:30:05.649021 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-02-14 03:30:05.649039 | orchestrator | Saturday 14 February 2026 03:29:51 +0000 (0:00:00.355) 0:03:13.047 *****
2026-02-14 03:30:05.649055 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:30:05.649072 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:30:05.649089 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:30:05.649138 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-14 03:30:05.649156 | orchestrator |
2026-02-14 03:30:05.649169 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-02-14 03:30:05.649181 | orchestrator | Saturday 14 February 2026 03:29:52 +0000 (0:00:01.034) 0:03:14.081 *****
2026-02-14 03:30:05.649192 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:30:05.649203 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:30:05.649214 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:30:05.649226 | orchestrator |
2026-02-14 03:30:05.649237 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-02-14 03:30:05.649248 | orchestrator | Saturday 14 February 2026 03:29:52 +0000 (0:00:00.354) 0:03:14.435 *****
2026-02-14 03:30:05.649260 | orchestrator | changed: [testbed-node-3]
2026-02-14 03:30:05.649270 | orchestrator | changed: [testbed-node-4]
2026-02-14 03:30:05.649279 | orchestrator | changed: [testbed-node-5]
2026-02-14 03:30:05.649289 | orchestrator |
2026-02-14 03:30:05.649298 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-02-14 03:30:05.649308 | orchestrator | Saturday 14 February 2026 03:29:54 +0000 (0:00:01.266) 0:03:15.702 *****
2026-02-14 03:30:05.649317 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-14 03:30:05.649327 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-14 03:30:05.649350 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-14 03:30:05.649360 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:30:05.649370 | orchestrator |
2026-02-14 03:30:05.649380 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-02-14 03:30:05.649389 | orchestrator | Saturday 14 February 2026 03:29:55 +0000 (0:00:00.897) 0:03:16.600 *****
2026-02-14 03:30:05.649398 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:30:05.649408 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:30:05.649418 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:30:05.649427 | orchestrator |
2026-02-14 03:30:05.649437 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-02-14 03:30:05.649446 | orchestrator | Saturday 14 February 2026 03:29:55 +0000 (0:00:00.564) 0:03:17.164 *****
2026-02-14 03:30:05.649456 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:30:05.649465 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:30:05.649475 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:30:05.649484 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:30:05.649494 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:30:05.649503 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:30:05.649513 | orchestrator |
2026-02-14 03:30:05.649542 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-02-14 03:30:05.649552 | orchestrator | Saturday 14 February 2026 03:29:56 +0000 (0:00:00.662) 0:03:17.827 *****
2026-02-14 03:30:05.649562 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:30:05.649572 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:30:05.649581 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:30:05.649591 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 03:30:05.649601 | orchestrator |
2026-02-14 03:30:05.649611 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-02-14 03:30:05.649621 | orchestrator | Saturday 14 February 2026 03:29:57 +0000 (0:00:01.150) 0:03:18.977 *****
2026-02-14 03:30:05.649630 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:30:05.649640 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:30:05.649649 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:30:05.649659 | orchestrator |
2026-02-14 03:30:05.649668 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-02-14 03:30:05.649678 | orchestrator | Saturday 14 February 2026 03:29:57 +0000 (0:00:00.343) 0:03:19.321 *****
2026-02-14 03:30:05.649688 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:30:05.649705 | orchestrator | changed: [testbed-node-1]
2026-02-14 03:30:05.649715 | orchestrator | changed: [testbed-node-2]
2026-02-14 03:30:05.649725 | orchestrator |
2026-02-14 03:30:05.649734 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-02-14 03:30:05.649744 | orchestrator | Saturday 14 February 2026 03:29:59 +0000 (0:00:01.239) 0:03:20.560 *****
2026-02-14 03:30:05.649754 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-14 03:30:05.649763 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-14 03:30:05.649773 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-14 03:30:05.649782 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:30:05.649792 | orchestrator |
2026-02-14 03:30:05.649802 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-02-14 03:30:05.649811 | orchestrator | Saturday 14 February 2026 03:30:00 +0000 (0:00:01.091) 0:03:21.652 *****
2026-02-14 03:30:05.649843 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:30:05.649854 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:30:05.649864 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:30:05.649873 | orchestrator |
2026-02-14 03:30:05.649883 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-02-14 03:30:05.649893 | orchestrator |
2026-02-14
03:30:05.649903 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-14 03:30:05.649912 | orchestrator | Saturday 14 February 2026 03:30:00 +0000 (0:00:00.559) 0:03:22.211 ***** 2026-02-14 03:30:05.649923 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:30:05.649933 | orchestrator | 2026-02-14 03:30:05.649943 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-14 03:30:05.649953 | orchestrator | Saturday 14 February 2026 03:30:01 +0000 (0:00:00.763) 0:03:22.975 ***** 2026-02-14 03:30:05.649963 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:30:05.649972 | orchestrator | 2026-02-14 03:30:05.649982 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-14 03:30:05.649992 | orchestrator | Saturday 14 February 2026 03:30:01 +0000 (0:00:00.541) 0:03:23.516 ***** 2026-02-14 03:30:05.650001 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:30:05.650011 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:30:05.650082 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:30:05.650093 | orchestrator | 2026-02-14 03:30:05.650103 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-14 03:30:05.650113 | orchestrator | Saturday 14 February 2026 03:30:02 +0000 (0:00:00.731) 0:03:24.248 ***** 2026-02-14 03:30:05.650122 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:30:05.650132 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:30:05.650142 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:30:05.650151 | orchestrator | 2026-02-14 03:30:05.650161 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 
2026-02-14 03:30:05.650170 | orchestrator | Saturday 14 February 2026 03:30:03 +0000 (0:00:00.561) 0:03:24.809 ***** 2026-02-14 03:30:05.650180 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:30:05.650190 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:30:05.650199 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:30:05.650209 | orchestrator | 2026-02-14 03:30:05.650218 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-14 03:30:05.650228 | orchestrator | Saturday 14 February 2026 03:30:03 +0000 (0:00:00.380) 0:03:25.190 ***** 2026-02-14 03:30:05.650237 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:30:05.650247 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:30:05.650263 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:30:05.650273 | orchestrator | 2026-02-14 03:30:05.650283 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-14 03:30:05.650293 | orchestrator | Saturday 14 February 2026 03:30:03 +0000 (0:00:00.327) 0:03:25.517 ***** 2026-02-14 03:30:05.650310 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:30:05.650320 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:30:05.650329 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:30:05.650339 | orchestrator | 2026-02-14 03:30:05.650349 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-14 03:30:05.650358 | orchestrator | Saturday 14 February 2026 03:30:04 +0000 (0:00:00.749) 0:03:26.267 ***** 2026-02-14 03:30:05.650368 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:30:05.650378 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:30:05.650387 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:30:05.650397 | orchestrator | 2026-02-14 03:30:05.650407 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-14 
03:30:05.650416 | orchestrator | Saturday 14 February 2026 03:30:05 +0000 (0:00:00.554) 0:03:26.821 ***** 2026-02-14 03:30:05.650426 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:30:05.650436 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:30:05.650454 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:30:27.045006 | orchestrator | 2026-02-14 03:30:27.045117 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-14 03:30:27.045133 | orchestrator | Saturday 14 February 2026 03:30:05 +0000 (0:00:00.351) 0:03:27.173 ***** 2026-02-14 03:30:27.045145 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:30:27.045157 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:30:27.045169 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:30:27.045179 | orchestrator | 2026-02-14 03:30:27.045190 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-14 03:30:27.045201 | orchestrator | Saturday 14 February 2026 03:30:06 +0000 (0:00:00.713) 0:03:27.887 ***** 2026-02-14 03:30:27.045212 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:30:27.045223 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:30:27.045233 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:30:27.045244 | orchestrator | 2026-02-14 03:30:27.045255 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-14 03:30:27.045265 | orchestrator | Saturday 14 February 2026 03:30:07 +0000 (0:00:00.732) 0:03:28.619 ***** 2026-02-14 03:30:27.045276 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:30:27.045288 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:30:27.045298 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:30:27.045309 | orchestrator | 2026-02-14 03:30:27.045320 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-14 03:30:27.045330 | orchestrator | 
Saturday 14 February 2026 03:30:07 +0000 (0:00:00.582) 0:03:29.202 ***** 2026-02-14 03:30:27.045341 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:30:27.045352 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:30:27.045363 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:30:27.045373 | orchestrator | 2026-02-14 03:30:27.045384 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-14 03:30:27.045395 | orchestrator | Saturday 14 February 2026 03:30:08 +0000 (0:00:00.345) 0:03:29.548 ***** 2026-02-14 03:30:27.045405 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:30:27.045416 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:30:27.045427 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:30:27.045437 | orchestrator | 2026-02-14 03:30:27.045448 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-14 03:30:27.045458 | orchestrator | Saturday 14 February 2026 03:30:08 +0000 (0:00:00.326) 0:03:29.874 ***** 2026-02-14 03:30:27.045473 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:30:27.045492 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:30:27.045518 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:30:27.045540 | orchestrator | 2026-02-14 03:30:27.045559 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-14 03:30:27.045576 | orchestrator | Saturday 14 February 2026 03:30:08 +0000 (0:00:00.299) 0:03:30.174 ***** 2026-02-14 03:30:27.045594 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:30:27.045645 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:30:27.045667 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:30:27.045686 | orchestrator | 2026-02-14 03:30:27.045705 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-14 03:30:27.045722 | orchestrator | Saturday 14 February 
2026 03:30:09 +0000 (0:00:00.553) 0:03:30.728 ***** 2026-02-14 03:30:27.045736 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:30:27.045749 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:30:27.045795 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:30:27.045809 | orchestrator | 2026-02-14 03:30:27.045821 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-14 03:30:27.045834 | orchestrator | Saturday 14 February 2026 03:30:09 +0000 (0:00:00.334) 0:03:31.062 ***** 2026-02-14 03:30:27.045846 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:30:27.045859 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:30:27.045871 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:30:27.045883 | orchestrator | 2026-02-14 03:30:27.045895 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-14 03:30:27.045908 | orchestrator | Saturday 14 February 2026 03:30:09 +0000 (0:00:00.302) 0:03:31.365 ***** 2026-02-14 03:30:27.045920 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:30:27.045932 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:30:27.045943 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:30:27.045954 | orchestrator | 2026-02-14 03:30:27.045964 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-14 03:30:27.045975 | orchestrator | Saturday 14 February 2026 03:30:10 +0000 (0:00:00.332) 0:03:31.698 ***** 2026-02-14 03:30:27.045985 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:30:27.045996 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:30:27.046007 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:30:27.046075 | orchestrator | 2026-02-14 03:30:27.046089 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-14 03:30:27.046100 | orchestrator | Saturday 14 February 2026 03:30:10 +0000 (0:00:00.578) 
0:03:32.277 ***** 2026-02-14 03:30:27.046111 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:30:27.046122 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:30:27.046167 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:30:27.046179 | orchestrator | 2026-02-14 03:30:27.046215 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-02-14 03:30:27.046248 | orchestrator | Saturday 14 February 2026 03:30:11 +0000 (0:00:00.600) 0:03:32.877 ***** 2026-02-14 03:30:27.046266 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:30:27.046283 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:30:27.046301 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:30:27.046319 | orchestrator | 2026-02-14 03:30:27.046339 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-02-14 03:30:27.046357 | orchestrator | Saturday 14 February 2026 03:30:11 +0000 (0:00:00.343) 0:03:33.220 ***** 2026-02-14 03:30:27.046376 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:30:27.046388 | orchestrator | 2026-02-14 03:30:27.046399 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-02-14 03:30:27.046410 | orchestrator | Saturday 14 February 2026 03:30:12 +0000 (0:00:00.831) 0:03:34.052 ***** 2026-02-14 03:30:27.046421 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:30:27.046432 | orchestrator | 2026-02-14 03:30:27.046443 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-02-14 03:30:27.046475 | orchestrator | Saturday 14 February 2026 03:30:12 +0000 (0:00:00.170) 0:03:34.222 ***** 2026-02-14 03:30:27.046487 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-14 03:30:27.046498 | orchestrator | 2026-02-14 03:30:27.046509 | orchestrator | TASK [ceph-mon : Set_fact 
_initial_mon_key_success] **************************** 2026-02-14 03:30:27.046520 | orchestrator | Saturday 14 February 2026 03:30:13 +0000 (0:00:00.989) 0:03:35.212 ***** 2026-02-14 03:30:27.046542 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:30:27.046553 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:30:27.046564 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:30:27.046575 | orchestrator | 2026-02-14 03:30:27.046586 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-02-14 03:30:27.046597 | orchestrator | Saturday 14 February 2026 03:30:14 +0000 (0:00:00.346) 0:03:35.558 ***** 2026-02-14 03:30:27.046607 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:30:27.046618 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:30:27.046629 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:30:27.046640 | orchestrator | 2026-02-14 03:30:27.046650 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-02-14 03:30:27.046661 | orchestrator | Saturday 14 February 2026 03:30:14 +0000 (0:00:00.649) 0:03:36.208 ***** 2026-02-14 03:30:27.046672 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:30:27.046683 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:30:27.046694 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:30:27.046705 | orchestrator | 2026-02-14 03:30:27.046716 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-02-14 03:30:27.046727 | orchestrator | Saturday 14 February 2026 03:30:15 +0000 (0:00:01.249) 0:03:37.457 ***** 2026-02-14 03:30:27.046738 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:30:27.046749 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:30:27.046786 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:30:27.046798 | orchestrator | 2026-02-14 03:30:27.046809 | orchestrator | TASK [ceph-mon : Create monitor directory] 
************************************* 2026-02-14 03:30:27.046820 | orchestrator | Saturday 14 February 2026 03:30:16 +0000 (0:00:00.808) 0:03:38.266 ***** 2026-02-14 03:30:27.046831 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:30:27.046842 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:30:27.046853 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:30:27.046863 | orchestrator | 2026-02-14 03:30:27.046874 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-14 03:30:27.046885 | orchestrator | Saturday 14 February 2026 03:30:17 +0000 (0:00:00.652) 0:03:38.918 ***** 2026-02-14 03:30:27.046896 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:30:27.046907 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:30:27.046918 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:30:27.046928 | orchestrator | 2026-02-14 03:30:27.046939 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-14 03:30:27.046950 | orchestrator | Saturday 14 February 2026 03:30:18 +0000 (0:00:00.943) 0:03:39.861 ***** 2026-02-14 03:30:27.046961 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:30:27.046972 | orchestrator | 2026-02-14 03:30:27.046983 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-02-14 03:30:27.046994 | orchestrator | Saturday 14 February 2026 03:30:19 +0000 (0:00:01.368) 0:03:41.230 ***** 2026-02-14 03:30:27.047004 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:30:27.047015 | orchestrator | 2026-02-14 03:30:27.047026 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-14 03:30:27.047037 | orchestrator | Saturday 14 February 2026 03:30:20 +0000 (0:00:00.719) 0:03:41.950 ***** 2026-02-14 03:30:27.047048 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-14 03:30:27.047059 | orchestrator | ok: [testbed-node-1 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-02-14 03:30:27.047070 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-14 03:30:27.047081 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-14 03:30:27.047092 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-02-14 03:30:27.047103 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-14 03:30:27.047114 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-14 03:30:27.047125 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-02-14 03:30:27.047136 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-14 03:30:27.047153 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-02-14 03:30:27.047164 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-02-14 03:30:27.047175 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-02-14 03:30:27.047186 | orchestrator | 2026-02-14 03:30:27.047197 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-14 03:30:27.047208 | orchestrator | Saturday 14 February 2026 03:30:23 +0000 (0:00:03.067) 0:03:45.017 ***** 2026-02-14 03:30:27.047219 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:30:27.047230 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:30:27.047247 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:30:27.047258 | orchestrator | 2026-02-14 03:30:27.047269 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-02-14 03:30:27.047280 | orchestrator | Saturday 14 February 2026 03:30:24 +0000 (0:00:01.186) 0:03:46.204 ***** 2026-02-14 03:30:27.047291 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:30:27.047302 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:30:27.047313 | orchestrator | ok: [testbed-node-2] 
2026-02-14 03:30:27.047324 | orchestrator | 2026-02-14 03:30:27.047335 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-14 03:30:27.047346 | orchestrator | Saturday 14 February 2026 03:30:25 +0000 (0:00:00.569) 0:03:46.774 ***** 2026-02-14 03:30:27.047357 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:30:27.047367 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:30:27.047378 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:30:27.047389 | orchestrator | 2026-02-14 03:30:27.047400 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-14 03:30:27.047411 | orchestrator | Saturday 14 February 2026 03:30:25 +0000 (0:00:00.340) 0:03:47.114 ***** 2026-02-14 03:30:27.047422 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:30:27.047433 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:30:27.047444 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:30:27.047454 | orchestrator | 2026-02-14 03:30:27.047472 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-14 03:31:28.368597 | orchestrator | Saturday 14 February 2026 03:30:27 +0000 (0:00:01.448) 0:03:48.563 ***** 2026-02-14 03:31:28.368728 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:31:28.368746 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:31:28.368758 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:31:28.368770 | orchestrator | 2026-02-14 03:31:28.368782 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-14 03:31:28.368794 | orchestrator | Saturday 14 February 2026 03:30:28 +0000 (0:00:01.373) 0:03:49.937 ***** 2026-02-14 03:31:28.368805 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:31:28.368816 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:31:28.368827 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:31:28.368838 
| orchestrator | 2026-02-14 03:31:28.368850 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-02-14 03:31:28.368861 | orchestrator | Saturday 14 February 2026 03:30:28 +0000 (0:00:00.561) 0:03:50.499 ***** 2026-02-14 03:31:28.368872 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:31:28.368884 | orchestrator | 2026-02-14 03:31:28.368896 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-14 03:31:28.368907 | orchestrator | Saturday 14 February 2026 03:30:29 +0000 (0:00:00.572) 0:03:51.071 ***** 2026-02-14 03:31:28.368918 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:31:28.368929 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:31:28.368940 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:31:28.368951 | orchestrator | 2026-02-14 03:31:28.368962 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-14 03:31:28.368973 | orchestrator | Saturday 14 February 2026 03:30:29 +0000 (0:00:00.318) 0:03:51.390 ***** 2026-02-14 03:31:28.368984 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:31:28.369020 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:31:28.369032 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:31:28.369043 | orchestrator | 2026-02-14 03:31:28.369054 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-14 03:31:28.369065 | orchestrator | Saturday 14 February 2026 03:30:30 +0000 (0:00:00.548) 0:03:51.938 ***** 2026-02-14 03:31:28.369076 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:31:28.369087 | orchestrator | 2026-02-14 03:31:28.369098 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] 
***************** 2026-02-14 03:31:28.369109 | orchestrator | Saturday 14 February 2026 03:30:30 +0000 (0:00:00.577) 0:03:52.516 ***** 2026-02-14 03:31:28.369120 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:31:28.369131 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:31:28.369142 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:31:28.369155 | orchestrator | 2026-02-14 03:31:28.369167 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-14 03:31:28.369179 | orchestrator | Saturday 14 February 2026 03:30:32 +0000 (0:00:01.771) 0:03:54.288 ***** 2026-02-14 03:31:28.369192 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:31:28.369205 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:31:28.369218 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:31:28.369230 | orchestrator | 2026-02-14 03:31:28.369242 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-02-14 03:31:28.369254 | orchestrator | Saturday 14 February 2026 03:30:34 +0000 (0:00:01.460) 0:03:55.749 ***** 2026-02-14 03:31:28.369267 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:31:28.369279 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:31:28.369291 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:31:28.369303 | orchestrator | 2026-02-14 03:31:28.369315 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-14 03:31:28.369328 | orchestrator | Saturday 14 February 2026 03:30:35 +0000 (0:00:01.751) 0:03:57.500 ***** 2026-02-14 03:31:28.369340 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:31:28.369353 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:31:28.369365 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:31:28.369377 | orchestrator | 2026-02-14 03:31:28.369390 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] 
********************************** 2026-02-14 03:31:28.369402 | orchestrator | Saturday 14 February 2026 03:30:37 +0000 (0:00:01.883) 0:03:59.384 ***** 2026-02-14 03:31:28.369415 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:31:28.369428 | orchestrator | 2026-02-14 03:31:28.369440 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-02-14 03:31:28.369452 | orchestrator | Saturday 14 February 2026 03:30:38 +0000 (0:00:00.788) 0:04:00.173 ***** 2026-02-14 03:31:28.369470 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2026-02-14 03:31:28.369481 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:31:28.369493 | orchestrator | 2026-02-14 03:31:28.369504 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-14 03:31:28.369515 | orchestrator | Saturday 14 February 2026 03:31:00 +0000 (0:00:21.902) 0:04:22.075 ***** 2026-02-14 03:31:28.369526 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:31:28.369538 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:31:28.369548 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:31:28.369559 | orchestrator | 2026-02-14 03:31:28.369570 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-14 03:31:28.369581 | orchestrator | Saturday 14 February 2026 03:31:09 +0000 (0:00:09.292) 0:04:31.367 ***** 2026-02-14 03:31:28.369592 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:31:28.369639 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:31:28.369651 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:31:28.369670 | orchestrator | 2026-02-14 03:31:28.369682 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-14 03:31:28.369693 | orchestrator | 
Saturday 14 February 2026 03:31:10 +0000 (0:00:00.307) 0:04:31.675 *****
2026-02-14 03:31:28.369723 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a728306293995fed95be1b27684a937c03fdc93'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-02-14 03:31:28.369737 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a728306293995fed95be1b27684a937c03fdc93'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-02-14 03:31:28.369751 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a728306293995fed95be1b27684a937c03fdc93'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-02-14 03:31:28.369764 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a728306293995fed95be1b27684a937c03fdc93'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-02-14 03:31:28.369775 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a728306293995fed95be1b27684a937c03fdc93'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-02-14 03:31:28.369788 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a728306293995fed95be1b27684a937c03fdc93'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__2a728306293995fed95be1b27684a937c03fdc93'}])
2026-02-14 03:31:28.369801 | orchestrator |
2026-02-14 03:31:28.369812 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-14 03:31:28.369823 | orchestrator | Saturday 14 February 2026 03:31:24 +0000 (0:00:14.653) 0:04:46.328 *****
2026-02-14 03:31:28.369834 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:31:28.369845 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:31:28.369856 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:31:28.369867 | orchestrator |
2026-02-14 03:31:28.369878 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-02-14 03:31:28.369889 | orchestrator | Saturday 14 February 2026 03:31:25 +0000 (0:00:00.355) 0:04:46.684 *****
2026-02-14 03:31:28.369900 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 03:31:28.369911 | orchestrator |
2026-02-14 03:31:28.369922 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-02-14 03:31:28.369933 | orchestrator | Saturday 14 February 2026 03:31:25 +0000 (0:00:00.776) 0:04:47.461 *****
2026-02-14 03:31:28.369944 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:31:28.369954 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:31:28.369966 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:31:28.369976 | orchestrator |
2026-02-14 03:31:28.369987 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-02-14 03:31:28.370005 | orchestrator | Saturday 14 February 2026 03:31:26 +0000 (0:00:00.348) 0:04:47.809 *****
2026-02-14 03:31:28.370067 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:31:28.370087 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:31:28.370107 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:31:28.370127 | orchestrator |
2026-02-14 03:31:28.370147 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-02-14 03:31:28.370166 | orchestrator | Saturday 14 February 2026 03:31:26 +0000 (0:00:00.332) 0:04:48.142 *****
2026-02-14 03:31:28.370185 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-14 03:31:28.370200 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-14 03:31:28.370211 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-14 03:31:28.370222 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:31:28.370233 | orchestrator |
2026-02-14 03:31:28.370243 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-02-14 03:31:28.370254 | orchestrator | Saturday 14 February 2026 03:31:27 +0000 (0:00:00.923) 0:04:49.065 *****
2026-02-14 03:31:28.370265 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:31:28.370276 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:31:28.370287 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:31:28.370297 | orchestrator |
2026-02-14 03:31:28.370308 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-02-14 03:31:28.370319 | orchestrator |
2026-02-14 03:31:28.370340 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-14 03:31:54.837051 | orchestrator | Saturday 14 February 2026 03:31:28 +0000 (0:00:00.822) 0:04:49.887 *****
2026-02-14 03:31:54.837163 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 03:31:54.837179 | orchestrator |
2026-02-14 03:31:54.837192 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-14 03:31:54.837204 | orchestrator | Saturday 14 February 2026 03:31:28 +0000 (0:00:00.541) 0:04:50.429 *****
2026-02-14 03:31:54.837215 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 03:31:54.837227 | orchestrator |
2026-02-14 03:31:54.837237 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-14 03:31:54.837248 | orchestrator | Saturday 14 February 2026 03:31:29 +0000 (0:00:00.774) 0:04:51.203 *****
2026-02-14 03:31:54.837259 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:31:54.837271 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:31:54.837282 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:31:54.837293 | orchestrator |
2026-02-14 03:31:54.837304 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-14 03:31:54.837319 | orchestrator | Saturday 14 February 2026 03:31:30 +0000 (0:00:00.714) 0:04:51.917 *****
2026-02-14 03:31:54.837337 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:31:54.837356 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:31:54.837373 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:31:54.837392 | orchestrator |
2026-02-14 03:31:54.837411 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-14 03:31:54.837430 | orchestrator | Saturday 14 February 2026 03:31:30 +0000 (0:00:00.304) 0:04:52.222 *****
2026-02-14 03:31:54.837449 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:31:54.837466 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:31:54.837485 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:31:54.837496 | orchestrator |
2026-02-14 03:31:54.837507 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-14 03:31:54.837518 | orchestrator | Saturday 14 February 2026 03:31:31 +0000 (0:00:00.537) 0:04:52.759 *****
2026-02-14 03:31:54.837529 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:31:54.837582 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:31:54.837622 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:31:54.837635 | orchestrator |
2026-02-14 03:31:54.837649 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-14 03:31:54.837662 | orchestrator | Saturday 14 February 2026 03:31:31 +0000 (0:00:00.330) 0:04:53.090 *****
2026-02-14 03:31:54.837674 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:31:54.837687 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:31:54.837699 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:31:54.837711 | orchestrator |
2026-02-14 03:31:54.837724 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-14 03:31:54.837737 | orchestrator | Saturday 14 February 2026 03:31:32 +0000 (0:00:00.693) 0:04:53.783 *****
2026-02-14 03:31:54.837756 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:31:54.837774 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:31:54.837792 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:31:54.837811 | orchestrator |
2026-02-14 03:31:54.837830 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-14 03:31:54.837849 | orchestrator | Saturday 14 February 2026 03:31:32 +0000 (0:00:00.329) 0:04:54.113 *****
2026-02-14 03:31:54.837867 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:31:54.837886 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:31:54.837906 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:31:54.837926 | orchestrator |
2026-02-14 03:31:54.837947 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-14 03:31:54.837967 | orchestrator | Saturday 14 February 2026 03:31:33 +0000 (0:00:00.559) 0:04:54.672 *****
2026-02-14 03:31:54.837982 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:31:54.837993 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:31:54.838004 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:31:54.838014 | orchestrator |
2026-02-14 03:31:54.838086 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-14 03:31:54.838098 | orchestrator | Saturday 14 February 2026 03:31:33 +0000 (0:00:00.756) 0:04:55.428 *****
2026-02-14 03:31:54.838108 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:31:54.838119 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:31:54.838130 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:31:54.838141 | orchestrator |
2026-02-14 03:31:54.838153 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-14 03:31:54.838164 | orchestrator | Saturday 14 February 2026 03:31:34 +0000 (0:00:00.762) 0:04:56.191 *****
2026-02-14 03:31:54.838224 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:31:54.838251 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:31:54.838291 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:31:54.838313 | orchestrator |
2026-02-14 03:31:54.838333 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-14 03:31:54.838351 | orchestrator | Saturday 14 February 2026 03:31:34 +0000 (0:00:00.316) 0:04:56.507 *****
2026-02-14 03:31:54.838368 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:31:54.838382 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:31:54.838401 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:31:54.838417 | orchestrator |
2026-02-14 03:31:54.838434 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-14 03:31:54.838453 | orchestrator | Saturday 14 February 2026 03:31:35 +0000 (0:00:00.596) 0:04:57.104 *****
2026-02-14 03:31:54.838473 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:31:54.838485 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:31:54.838496 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:31:54.838507 | orchestrator |
2026-02-14 03:31:54.838517 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-14 03:31:54.838528 | orchestrator | Saturday 14 February 2026 03:31:35 +0000 (0:00:00.325) 0:04:57.430 *****
2026-02-14 03:31:54.838568 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:31:54.838581 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:31:54.838592 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:31:54.838603 | orchestrator |
2026-02-14 03:31:54.838647 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-14 03:31:54.838659 | orchestrator | Saturday 14 February 2026 03:31:36 +0000 (0:00:00.355) 0:04:57.785 *****
2026-02-14 03:31:54.838670 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:31:54.838681 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:31:54.838691 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:31:54.838702 | orchestrator |
2026-02-14 03:31:54.838713 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-14 03:31:54.838724 | orchestrator | Saturday 14 February 2026 03:31:36 +0000 (0:00:00.322) 0:04:58.108 *****
2026-02-14 03:31:54.838734 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:31:54.838745 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:31:54.838756 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:31:54.838767 | orchestrator |
2026-02-14 03:31:54.838778 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-14 03:31:54.838788 | orchestrator | Saturday 14 February 2026 03:31:37 +0000 (0:00:00.575) 0:04:58.683 *****
2026-02-14 03:31:54.838799 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:31:54.838810 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:31:54.838820 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:31:54.838831 | orchestrator |
2026-02-14 03:31:54.838842 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-14 03:31:54.838853 | orchestrator | Saturday 14 February 2026 03:31:37 +0000 (0:00:00.338) 0:04:59.021 *****
2026-02-14 03:31:54.838863 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:31:54.838874 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:31:54.838885 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:31:54.838896 | orchestrator |
2026-02-14 03:31:54.838906 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-14 03:31:54.838917 | orchestrator | Saturday 14 February 2026 03:31:37 +0000 (0:00:00.345) 0:04:59.367 *****
2026-02-14 03:31:54.838932 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:31:54.838958 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:31:54.838980 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:31:54.838998 | orchestrator |
2026-02-14 03:31:54.839016 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-14 03:31:54.839034 | orchestrator | Saturday 14 February 2026 03:31:38 +0000 (0:00:00.345) 0:04:59.712 *****
2026-02-14 03:31:54.839051 | orchestrator | ok: [testbed-node-0]
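For orientation: the per-key loop at the top of this section is ceph-ansible writing individual options into the cluster configuration. A hypothetical reconstruction of the resulting `[global]` section, using only the values visible in this log (the `osd_crush_chooseleaf_type` item was skipped because its value was omitted), might look like:

```ini
; Sketch reconstructed from the logged task items, not the actual
; file generated by this job.
[global]
public_network = 192.168.16.0/20
cluster_network = 192.168.16.0/20
osd_pool_default_crush_rule = -1
ms_bind_ipv6 = false
ms_bind_ipv4 = true
```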
2026-02-14 03:31:54.839068 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:31:54.839085 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:31:54.839105 | orchestrator |
2026-02-14 03:31:54.839123 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-14 03:31:54.839141 | orchestrator | Saturday 14 February 2026 03:31:38 +0000 (0:00:00.811) 0:05:00.523 *****
2026-02-14 03:31:54.839160 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-14 03:31:54.839180 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-14 03:31:54.839200 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-14 03:31:54.839219 | orchestrator |
2026-02-14 03:31:54.839230 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-14 03:31:54.839241 | orchestrator | Saturday 14 February 2026 03:31:39 +0000 (0:00:00.656) 0:05:01.180 *****
2026-02-14 03:31:54.839251 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 03:31:54.839263 | orchestrator |
2026-02-14 03:31:54.839274 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-02-14 03:31:54.839284 | orchestrator | Saturday 14 February 2026 03:31:40 +0000 (0:00:00.798) 0:05:01.978 *****
2026-02-14 03:31:54.839295 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:31:54.839306 | orchestrator | changed: [testbed-node-1]
2026-02-14 03:31:54.839317 | orchestrator | changed: [testbed-node-2]
2026-02-14 03:31:54.839327 | orchestrator |
2026-02-14 03:31:54.839338 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-02-14 03:31:54.839359 | orchestrator | Saturday 14 February 2026 03:31:41 +0000 (0:00:00.705) 0:05:02.684 *****
2026-02-14 03:31:54.839370 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:31:54.839381 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:31:54.839391 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:31:54.839402 | orchestrator |
2026-02-14 03:31:54.839413 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-02-14 03:31:54.839424 | orchestrator | Saturday 14 February 2026 03:31:41 +0000 (0:00:00.360) 0:05:03.045 *****
2026-02-14 03:31:54.839440 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-14 03:31:54.839459 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-14 03:31:54.839478 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-14 03:31:54.839495 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-02-14 03:31:54.839514 | orchestrator |
2026-02-14 03:31:54.839581 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-02-14 03:31:54.839596 | orchestrator | Saturday 14 February 2026 03:31:51 +0000 (0:00:10.412) 0:05:13.457 *****
2026-02-14 03:31:54.839607 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:31:54.839618 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:31:54.839629 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:31:54.839640 | orchestrator |
2026-02-14 03:31:54.839651 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-02-14 03:31:54.839661 | orchestrator | Saturday 14 February 2026 03:31:52 +0000 (0:00:00.362) 0:05:13.819 *****
2026-02-14 03:31:54.839672 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-14 03:31:54.839682 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-14 03:31:54.839693 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-14 03:31:54.839704 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-14 03:31:54.839714 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-14 03:31:54.839725 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-14 03:31:54.839736 | orchestrator |
2026-02-14 03:31:54.839746 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-02-14 03:31:54.839769 | orchestrator | Saturday 14 February 2026 03:31:54 +0000 (0:00:02.533) 0:05:16.352 *****
2026-02-14 03:32:49.832229 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-14 03:32:49.832347 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-14 03:32:49.832363 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-14 03:32:49.832375 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-14 03:32:49.832387 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-02-14 03:32:49.832398 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-02-14 03:32:49.832409 | orchestrator |
2026-02-14 03:32:49.832486 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-02-14 03:32:49.832503 | orchestrator | Saturday 14 February 2026 03:31:56 +0000 (0:00:01.233) 0:05:17.585 *****
2026-02-14 03:32:49.832514 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:32:49.832525 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:32:49.832536 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:32:49.832547 | orchestrator |
2026-02-14 03:32:49.832559 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-02-14 03:32:49.832570 | orchestrator | Saturday 14 February 2026 03:31:56 +0000 (0:00:00.693) 0:05:18.279 *****
2026-02-14 03:32:49.832581 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:32:49.832593 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:32:49.832604 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:32:49.832615 | orchestrator |
2026-02-14 03:32:49.832626 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-14 03:32:49.832637 | orchestrator | Saturday 14 February 2026 03:31:57 +0000 (0:00:00.315) 0:05:18.594 *****
2026-02-14 03:32:49.832675 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:32:49.832687 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:32:49.832698 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:32:49.832709 | orchestrator |
2026-02-14 03:32:49.832720 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-14 03:32:49.832731 | orchestrator | Saturday 14 February 2026 03:31:57 +0000 (0:00:00.582) 0:05:19.177 *****
2026-02-14 03:32:49.832742 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 03:32:49.832753 | orchestrator |
2026-02-14 03:32:49.832766 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-02-14 03:32:49.832780 | orchestrator | Saturday 14 February 2026 03:31:58 +0000 (0:00:00.536) 0:05:19.714 *****
2026-02-14 03:32:49.832792 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:32:49.832805 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:32:49.832817 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:32:49.832829 | orchestrator |
2026-02-14 03:32:49.832841 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-02-14 03:32:49.832854 | orchestrator | Saturday 14 February 2026 03:31:58 +0000 (0:00:00.331) 0:05:20.045 *****
2026-02-14 03:32:49.832866 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:32:49.832878 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:32:49.832891 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:32:49.832903 | orchestrator |
2026-02-14 03:32:49.832915 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-02-14 03:32:49.832927 | orchestrator | Saturday 14 February 2026 03:31:59 +0000 (0:00:00.606) 0:05:20.652 *****
2026-02-14 03:32:49.832940 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 03:32:49.832952 | orchestrator |
2026-02-14 03:32:49.832964 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-02-14 03:32:49.832976 | orchestrator | Saturday 14 February 2026 03:31:59 +0000 (0:00:00.544) 0:05:21.196 *****
2026-02-14 03:32:49.832988 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:32:49.833001 | orchestrator | changed: [testbed-node-1]
2026-02-14 03:32:49.833013 | orchestrator | changed: [testbed-node-2]
2026-02-14 03:32:49.833026 | orchestrator |
2026-02-14 03:32:49.833037 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-02-14 03:32:49.833050 | orchestrator | Saturday 14 February 2026 03:32:00 +0000 (0:00:01.246) 0:05:22.443 *****
2026-02-14 03:32:49.833062 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:32:49.833074 | orchestrator | changed: [testbed-node-1]
2026-02-14 03:32:49.833086 | orchestrator | changed: [testbed-node-2]
2026-02-14 03:32:49.833098 | orchestrator |
2026-02-14 03:32:49.833110 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-02-14 03:32:49.833122 | orchestrator | Saturday 14 February 2026 03:32:02 +0000 (0:00:01.761) 0:05:23.824 *****
2026-02-14 03:32:49.833133 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:32:49.833143 | orchestrator | changed: [testbed-node-1]
2026-02-14 03:32:49.833155 | orchestrator | changed: [testbed-node-2]
2026-02-14 03:32:49.833166 | orchestrator |
2026-02-14 03:32:49.833176 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
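The "Generate systemd unit file" / "Generate systemd ceph-mgr target file" tasks above lay down a templated per-daemon unit plus a `ceph-mgr.target` that groups the managers. A minimal sketch of what such a containerized unit can look like (the image name, mount paths, and ExecStart details are assumptions for illustration, not taken from this job; the real template ships with ceph-ansible's ceph-mgr role):

```ini
# Hypothetical ceph-mgr@.service sketch, instantiated per node
# (e.g. ceph-mgr@testbed-node-0.service).
[Unit]
Description=Ceph manager daemon
After=network-online.target
PartOf=ceph-mgr.target

[Service]
# Paths and image are illustrative assumptions.
ExecStart=/usr/bin/podman run --rm --name ceph-mgr-%i \
    -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph:/var/lib/ceph:z \
    quay.io/ceph/daemon:latest mgr
Restart=always

[Install]
WantedBy=ceph-mgr.target
```

Enabling `ceph-mgr.target` (the following task) is what lets `systemctl start ceph-mgr.target` bring up all manager units together.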
2026-02-14 03:32:49.833204 | orchestrator | Saturday 14 February 2026 03:32:04 +0000 (0:00:01.761) 0:05:25.585 *****
2026-02-14 03:32:49.833215 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:32:49.833226 | orchestrator | changed: [testbed-node-1]
2026-02-14 03:32:49.833237 | orchestrator | changed: [testbed-node-2]
2026-02-14 03:32:49.833247 | orchestrator |
2026-02-14 03:32:49.833258 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-02-14 03:32:49.833269 | orchestrator | Saturday 14 February 2026 03:32:05 +0000 (0:00:01.923) 0:05:27.508 *****
2026-02-14 03:32:49.833279 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:32:49.833290 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:32:49.833301 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-02-14 03:32:49.833319 | orchestrator |
2026-02-14 03:32:49.833330 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-02-14 03:32:49.833341 | orchestrator | Saturday 14 February 2026 03:32:06 +0000 (0:00:00.619) 0:05:28.128 *****
2026-02-14 03:32:49.833352 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-02-14 03:32:49.833363 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-02-14 03:32:49.833390 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-02-14 03:32:49.833402 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-02-14 03:32:49.833413 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-14 03:32:49.833450 | orchestrator |
2026-02-14 03:32:49.833462 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-02-14 03:32:49.833473 | orchestrator | Saturday 14 February 2026 03:32:30 +0000 (0:00:24.387) 0:05:52.515 *****
2026-02-14 03:32:49.833484 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-14 03:32:49.833495 | orchestrator |
2026-02-14 03:32:49.833505 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-02-14 03:32:49.833516 | orchestrator | Saturday 14 February 2026 03:32:32 +0000 (0:00:01.236) 0:05:53.752 *****
2026-02-14 03:32:49.833527 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:32:49.833537 | orchestrator |
2026-02-14 03:32:49.833548 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-02-14 03:32:49.833559 | orchestrator | Saturday 14 February 2026 03:32:32 +0000 (0:00:00.317) 0:05:54.070 *****
2026-02-14 03:32:49.833570 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:32:49.833580 | orchestrator |
2026-02-14 03:32:49.833591 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-02-14 03:32:49.833602 | orchestrator | Saturday 14 February 2026 03:32:32 +0000 (0:00:00.157) 0:05:54.228 *****
2026-02-14 03:32:49.833613 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-02-14 03:32:49.833623 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-02-14 03:32:49.833634 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-02-14 03:32:49.833645 | orchestrator |
2026-02-14 03:32:49.833655 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-02-14 03:32:49.833666 | orchestrator | Saturday 14 February 2026 03:32:39 +0000 (0:00:06.327) 0:06:00.555 *****
2026-02-14 03:32:49.833677 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-02-14 03:32:49.833688 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-02-14 03:32:49.833698 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-02-14 03:32:49.833709 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-02-14 03:32:49.833720 | orchestrator |
2026-02-14 03:32:49.833730 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-14 03:32:49.833741 | orchestrator | Saturday 14 February 2026 03:32:44 +0000 (0:00:05.034) 0:06:05.590 *****
2026-02-14 03:32:49.833752 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:32:49.833763 | orchestrator | changed: [testbed-node-1]
2026-02-14 03:32:49.833774 | orchestrator | changed: [testbed-node-2]
2026-02-14 03:32:49.833784 | orchestrator |
2026-02-14 03:32:49.833795 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-02-14 03:32:49.833806 | orchestrator | Saturday 14 February 2026 03:32:44 +0000 (0:00:00.741) 0:06:06.331 *****
2026-02-14 03:32:49.833817 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 03:32:49.833828 | orchestrator |
2026-02-14 03:32:49.833846 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-02-14 03:32:49.833857 | orchestrator | Saturday 14 February 2026 03:32:45 +0000 (0:00:00.584) 0:06:06.916 *****
2026-02-14 03:32:49.833868 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:32:49.833879 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:32:49.833889 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:32:49.833900 | orchestrator |
2026-02-14 03:32:49.833911 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-02-14 03:32:49.833922 | orchestrator | Saturday 14 February 2026 03:32:45 +0000 (0:00:00.579) 0:06:07.496 *****
2026-02-14 03:32:49.833933 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:32:49.833943 | orchestrator | changed: [testbed-node-1]
2026-02-14 03:32:49.833954 | orchestrator | changed: [testbed-node-2]
2026-02-14 03:32:49.833965 | orchestrator |
2026-02-14 03:32:49.833975 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-02-14 03:32:49.833986 | orchestrator | Saturday 14 February 2026 03:32:47 +0000 (0:00:01.187) 0:06:08.683 *****
2026-02-14 03:32:49.833997 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-14 03:32:49.834007 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-14 03:32:49.834087 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-14 03:32:49.834101 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:32:49.834111 | orchestrator |
2026-02-14 03:32:49.834122 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-02-14 03:32:49.834133 | orchestrator | Saturday 14 February 2026 03:32:47 +0000 (0:00:00.633) 0:06:09.317 *****
2026-02-14 03:32:49.834178 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:32:49.834190 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:32:49.834201 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:32:49.834212 | orchestrator |
2026-02-14 03:32:49.834222 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-02-14 03:32:49.834233 | orchestrator |
2026-02-14 03:32:49.834244 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-14 03:32:49.834255 | orchestrator | Saturday 14 February 2026 03:32:48 +0000 (0:00:00.529) 0:06:09.847 *****
2026-02-14 03:32:49.834266 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-14 03:32:49.834278 | orchestrator |
2026-02-14 03:32:49.834289 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-14 03:32:49.834299 | orchestrator | Saturday 14 February 2026 03:32:49 +0000 (0:00:00.770) 0:06:10.618 *****
2026-02-14 03:32:49.834318 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-14 03:33:06.421588 | orchestrator |
2026-02-14 03:33:06.421720 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-14 03:33:06.421751 | orchestrator | Saturday 14 February 2026 03:32:49 +0000 (0:00:00.735) 0:06:11.353 *****
2026-02-14 03:33:06.421771 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:33:06.421793 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:33:06.421812 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:33:06.421836 | orchestrator |
2026-02-14 03:33:06.421854 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-14 03:33:06.421866 | orchestrator | Saturday 14 February 2026 03:32:50 +0000 (0:00:00.324) 0:06:11.678 *****
2026-02-14 03:33:06.421877 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:33:06.421889 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:33:06.421900 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:33:06.421911 | orchestrator |
2026-02-14 03:33:06.421922 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-14 03:33:06.421933 | orchestrator | Saturday 14 February 2026 03:32:50 +0000 (0:00:00.723) 0:06:12.402 *****
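The mgr_modules.yml tasks logged above ("Get enabled modules from ceph-mgr", "Disable ceph mgr enabled modules", "Add modules to ceph-mgr") wrap the standard ceph CLI. A sketch of the equivalent manual calls, assuming they are run on a node with admin access to this cluster (e.g. inside the mon container on testbed-node-0; the exec context is an assumption):

```
# Show which mgr modules are currently enabled.
ceph mgr module ls
# Disable the modules the log shows being turned off ...
ceph mgr module disable iostat
ceph mgr module disable nfs
ceph mgr module disable restful
# ... then enable the desired set (balancer/status were skipped as
# always-on or already enabled in this run).
ceph mgr module enable dashboard
ceph mgr module enable prometheus
```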
2026-02-14 03:33:06.421945 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:33:06.421955 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:33:06.421991 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:33:06.422003 | orchestrator |
2026-02-14 03:33:06.422065 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-14 03:33:06.422080 | orchestrator | Saturday 14 February 2026 03:32:51 +0000 (0:00:00.694) 0:06:13.096 *****
2026-02-14 03:33:06.422092 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:33:06.422102 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:33:06.422114 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:33:06.422126 | orchestrator |
2026-02-14 03:33:06.422139 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-14 03:33:06.422153 | orchestrator | Saturday 14 February 2026 03:32:52 +0000 (0:00:00.943) 0:06:14.040 *****
2026-02-14 03:33:06.422166 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:33:06.422179 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:33:06.422191 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:33:06.422204 | orchestrator |
2026-02-14 03:33:06.422217 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-14 03:33:06.422229 | orchestrator | Saturday 14 February 2026 03:32:52 +0000 (0:00:00.351) 0:06:14.391 *****
2026-02-14 03:33:06.422242 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:33:06.422255 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:33:06.422268 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:33:06.422281 | orchestrator |
2026-02-14 03:33:06.422294 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-14 03:33:06.422306 | orchestrator | Saturday 14 February 2026 03:32:53 +0000 (0:00:00.322) 0:06:14.713 *****
2026-02-14 03:33:06.422319 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:33:06.422332 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:33:06.422345 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:33:06.422358 | orchestrator |
2026-02-14 03:33:06.422370 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-14 03:33:06.422383 | orchestrator | Saturday 14 February 2026 03:32:53 +0000 (0:00:00.310) 0:06:15.024 *****
2026-02-14 03:33:06.422421 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:33:06.422435 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:33:06.422448 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:33:06.422461 | orchestrator |
2026-02-14 03:33:06.422473 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-14 03:33:06.422484 | orchestrator | Saturday 14 February 2026 03:32:54 +0000 (0:00:00.958) 0:06:15.983 *****
2026-02-14 03:33:06.422495 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:33:06.422506 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:33:06.422516 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:33:06.422527 | orchestrator |
2026-02-14 03:33:06.422538 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-14 03:33:06.422549 | orchestrator | Saturday 14 February 2026 03:32:55 +0000 (0:00:00.695) 0:06:16.678 *****
2026-02-14 03:33:06.422561 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:33:06.422572 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:33:06.422583 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:33:06.422594 | orchestrator |
2026-02-14 03:33:06.422605 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-14 03:33:06.422616 | orchestrator | Saturday 14 February 2026 03:32:55 +0000 (0:00:00.308) 0:06:16.986 *****
2026-02-14 03:33:06.422627 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:33:06.422638 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:33:06.422649 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:33:06.422660 | orchestrator |
2026-02-14 03:33:06.422671 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-14 03:33:06.422697 | orchestrator | Saturday 14 February 2026 03:32:55 +0000 (0:00:00.329) 0:06:17.316 *****
2026-02-14 03:33:06.422708 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:33:06.422719 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:33:06.422730 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:33:06.422741 | orchestrator |
2026-02-14 03:33:06.422761 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-14 03:33:06.422772 | orchestrator | Saturday 14 February 2026 03:32:56 +0000 (0:00:00.591) 0:06:17.907 *****
2026-02-14 03:33:06.422783 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:33:06.422794 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:33:06.422805 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:33:06.422816 | orchestrator |
2026-02-14 03:33:06.422827 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-14 03:33:06.422838 | orchestrator | Saturday 14 February 2026 03:32:56 +0000 (0:00:00.344) 0:06:18.252 *****
2026-02-14 03:33:06.422849 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:33:06.422860 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:33:06.422871 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:33:06.422881 | orchestrator |
2026-02-14 03:33:06.422892 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-14 03:33:06.422903 | orchestrator | Saturday 14 February 2026 03:32:57 +0000 (0:00:00.351) 0:06:18.604 *****
2026-02-14 03:33:06.422915 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:33:06.422926 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:33:06.422937 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:33:06.422948 | orchestrator |
2026-02-14 03:33:06.422959 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-14 03:33:06.422989 | orchestrator | Saturday 14 February 2026 03:32:57 +0000 (0:00:00.331) 0:06:18.936 *****
2026-02-14 03:33:06.423001 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:33:06.423012 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:33:06.423023 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:33:06.423034 | orchestrator |
2026-02-14 03:33:06.423045 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-14 03:33:06.423056 | orchestrator | Saturday 14 February 2026 03:32:57 +0000 (0:00:00.550) 0:06:19.486 *****
2026-02-14 03:33:06.423067 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:33:06.423078 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:33:06.423089 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:33:06.423100 | orchestrator |
2026-02-14 03:33:06.423111 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-14 03:33:06.423122 | orchestrator | Saturday 14 February 2026 03:32:58 +0000 (0:00:00.329) 0:06:19.816 *****
2026-02-14 03:33:06.423133 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:33:06.423144 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:33:06.423155 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:33:06.423166 | orchestrator |
2026-02-14 03:33:06.423177 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-14 03:33:06.423188 | orchestrator | Saturday 14 February 2026 03:32:58 +0000 (0:00:00.368) 0:06:20.184 *****
2026-02-14 03:33:06.423199 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:33:06.423210 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:33:06.423221 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:33:06.423232 | orchestrator |
2026-02-14 03:33:06.423243 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-02-14 03:33:06.423254 | orchestrator | Saturday 14 February 2026 03:32:59 +0000 (0:00:00.818) 0:06:21.003 *****
2026-02-14 03:33:06.423265 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:33:06.423276 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:33:06.423286 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:33:06.423297 | orchestrator |
2026-02-14 03:33:06.423308 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-02-14 03:33:06.423319 | orchestrator | Saturday 14 February 2026 03:32:59 +0000 (0:00:00.348) 0:06:21.352 *****
2026-02-14 03:33:06.423330 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-14 03:33:06.423342 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-14 03:33:06.423353 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-14 03:33:06.423371 | orchestrator |
2026-02-14 03:33:06.423382 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-02-14 03:33:06.423415 | orchestrator | Saturday 14 February 2026 03:33:00 +0000 (0:00:00.660) 0:06:22.012 *****
2026-02-14 03:33:06.423428 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-14 03:33:06.423439 | orchestrator |
2026-02-14 03:33:06.423450 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-02-14 03:33:06.423461 | orchestrator | Saturday 14 February 2026 03:33:00 +0000 (0:00:00.518) 0:06:22.530 *****
2026-02-14 03:33:06.423472 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:33:06.423483 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:33:06.423494 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:33:06.423504 | orchestrator |
2026-02-14 03:33:06.423515 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-02-14 03:33:06.423527 | orchestrator | Saturday 14 February 2026 03:33:01 +0000 (0:00:00.594) 0:06:23.125 *****
2026-02-14 03:33:06.423538 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:33:06.423549 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:33:06.423560 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:33:06.423570 | orchestrator |
2026-02-14 03:33:06.423582 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-02-14 03:33:06.423593 | orchestrator | Saturday 14 February 2026 03:33:01 +0000 (0:00:00.320) 0:06:23.445 *****
2026-02-14 03:33:06.423604 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:33:06.423615 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:33:06.423626 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:33:06.423637 | orchestrator |
2026-02-14 03:33:06.423648 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-02-14 03:33:06.423659 | orchestrator | Saturday 14 February 2026 03:33:02 +0000 (0:00:00.599) 0:06:24.044 *****
2026-02-14 03:33:06.423670 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:33:06.423681 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:33:06.423692 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:33:06.423702 | orchestrator |
2026-02-14 03:33:06.423724 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-02-14 03:33:06.423744 | orchestrator | Saturday 14 February 2026 03:33:03 +0000 (0:00:00.918) 0:06:24.963 *****
2026-02-14 03:33:06.423762 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-14 03:33:06.423780 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-14 03:33:06.423797 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-14 03:33:06.423814 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-14 03:33:06.423831 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-14 03:33:06.423846 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-14 03:33:06.423861 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-14 03:33:06.423877 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-14 03:33:06.423893 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-14 03:33:06.423922 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-14 03:34:15.358114 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-14 03:34:15.358228 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-14 03:34:15.358243 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-14 03:34:15.358255 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-14 03:34:15.358359 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-14 03:34:15.358375 | orchestrator |
2026-02-14 03:34:15.358387 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
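(Aside: the "Apply operating system tuning" task above applies each item via the Ansible sysctl module; the combined result is equivalent to a sysctl.d fragment like the following sketch. Values are copied from the log items; the file name is illustrative, and vm.min_free_kbytes is derived per-host from the "Get default vm.min_free_kbytes" task, so 67584 is specific to these nodes.)

```ini
# /etc/sysctl.d/90-ceph-osd.conf (illustrative name)
fs.aio-max-nr = 1048576
fs.file-max = 26234859
vm.zone_reclaim_mode = 0
vm.swappiness = 10
vm.min_free_kbytes = 67584
```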
2026-02-14 03:34:15.358398 | orchestrator | Saturday 14 February 2026 03:33:06 +0000 (0:00:02.980) 0:06:27.943 *****
2026-02-14 03:34:15.358409 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:34:15.358421 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:34:15.358432 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:34:15.358443 | orchestrator |
2026-02-14 03:34:15.358454 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-02-14 03:34:15.358465 | orchestrator | Saturday 14 February 2026 03:33:06 +0000 (0:00:00.310) 0:06:28.254 *****
2026-02-14 03:34:15.358476 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-14 03:34:15.358487 | orchestrator |
2026-02-14 03:34:15.358498 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-02-14 03:34:15.358509 | orchestrator | Saturday 14 February 2026 03:33:07 +0000 (0:00:00.803) 0:06:29.058 *****
2026-02-14 03:34:15.358520 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-14 03:34:15.358531 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-14 03:34:15.358542 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-14 03:34:15.358553 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-02-14 03:34:15.358564 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-02-14 03:34:15.358582 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-02-14 03:34:15.358601 | orchestrator |
2026-02-14 03:34:15.358619 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-02-14 03:34:15.358637 | orchestrator | Saturday 14 February 2026 03:33:08 +0000 (0:00:00.950) 0:06:30.008 *****
2026-02-14 03:34:15.358655 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-14 03:34:15.358674 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-14 03:34:15.358691 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-14 03:34:15.358710 | orchestrator |
2026-02-14 03:34:15.358729 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-02-14 03:34:15.358749 | orchestrator | Saturday 14 February 2026 03:33:10 +0000 (0:00:01.950) 0:06:31.959 *****
2026-02-14 03:34:15.358768 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-14 03:34:15.358789 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-14 03:34:15.358809 | orchestrator | changed: [testbed-node-3]
2026-02-14 03:34:15.358828 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-14 03:34:15.358845 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-02-14 03:34:15.358865 | orchestrator | changed: [testbed-node-4]
2026-02-14 03:34:15.358885 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-14 03:34:15.358905 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-14 03:34:15.358921 | orchestrator | changed: [testbed-node-5]
2026-02-14 03:34:15.358933 | orchestrator |
2026-02-14 03:34:15.358944 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-02-14 03:34:15.358955 | orchestrator | Saturday 14 February 2026 03:33:11 +0000 (0:00:01.165) 0:06:33.124 *****
2026-02-14 03:34:15.358966 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-14 03:34:15.358977 | orchestrator |
2026-02-14 03:34:15.358988 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-02-14 03:34:15.358999 | orchestrator | Saturday 14 February 2026 03:33:13 +0000 (0:00:02.021) 0:06:35.145 *****
2026-02-14 03:34:15.359026 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-14 03:34:15.359038 | orchestrator |
2026-02-14 03:34:15.359059 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-02-14 03:34:15.359070 | orchestrator | Saturday 14 February 2026 03:33:14 +0000 (0:00:00.803) 0:06:35.949 *****
2026-02-14 03:34:15.359082 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7b577363-2bac-543e-944e-5354861b1af5', 'data_vg': 'ceph-7b577363-2bac-543e-944e-5354861b1af5'})
2026-02-14 03:34:15.359095 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1745485d-ab31-507e-930d-8d3ce82a0691', 'data_vg': 'ceph-1745485d-ab31-507e-930d-8d3ce82a0691'})
2026-02-14 03:34:15.359106 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d74a1ea4-c27e-5375-be56-9d9a8e069fa6', 'data_vg': 'ceph-d74a1ea4-c27e-5375-be56-9d9a8e069fa6'})
2026-02-14 03:34:15.359117 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-df737486-1b51-5b4a-92b8-76d7a8957091', 'data_vg': 'ceph-df737486-1b51-5b4a-92b8-76d7a8957091'})
2026-02-14 03:34:15.359128 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f7da5590-35e5-5703-96c8-37fe127c27f7', 'data_vg': 'ceph-f7da5590-35e5-5703-96c8-37fe127c27f7'})
2026-02-14 03:34:15.359159 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-86d1df08-738c-52e0-accb-8c0a21213af6', 'data_vg': 'ceph-86d1df08-738c-52e0-accb-8c0a21213af6'})
2026-02-14 03:34:15.359171 | orchestrator |
2026-02-14 03:34:15.359182 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-02-14 03:34:15.359193 | orchestrator | Saturday 14 February 2026 03:33:58 +0000 (0:00:43.994) 0:07:19.944 *****
2026-02-14 03:34:15.359204 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:34:15.359215 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:34:15.359226 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:34:15.359237 | orchestrator |
2026-02-14 03:34:15.359248 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-02-14 03:34:15.359258 | orchestrator | Saturday 14 February 2026 03:33:58 +0000 (0:00:00.325) 0:07:20.269 *****
2026-02-14 03:34:15.359269 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-14 03:34:15.359280 | orchestrator |
2026-02-14 03:34:15.359319 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-02-14 03:34:15.359330 | orchestrator | Saturday 14 February 2026 03:33:59 +0000 (0:00:00.855) 0:07:21.124 *****
2026-02-14 03:34:15.359340 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:34:15.359351 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:34:15.359362 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:34:15.359373 | orchestrator |
2026-02-14 03:34:15.359384 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-02-14 03:34:15.359394 | orchestrator | Saturday 14 February 2026 03:34:00 +0000 (0:00:00.649) 0:07:21.774 *****
2026-02-14 03:34:15.359406 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:34:15.359417 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:34:15.359428 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:34:15.359438 | orchestrator |
2026-02-14 03:34:15.359449 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-02-14 03:34:15.359460 | orchestrator | Saturday 14 February 2026 03:34:02 +0000 (0:00:02.356) 0:07:24.130 *****
2026-02-14 03:34:15.359471 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-14 03:34:15.359482 | orchestrator |
2026-02-14 03:34:15.359493 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-02-14 03:34:15.359503 | orchestrator | Saturday 14 February 2026 03:34:03 +0000 (0:00:00.759) 0:07:24.890 *****
2026-02-14 03:34:15.359514 | orchestrator | changed: [testbed-node-3]
2026-02-14 03:34:15.359525 | orchestrator | changed: [testbed-node-4]
2026-02-14 03:34:15.359536 | orchestrator | changed: [testbed-node-5]
2026-02-14 03:34:15.359547 | orchestrator |
2026-02-14 03:34:15.359558 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-02-14 03:34:15.359569 | orchestrator | Saturday 14 February 2026 03:34:04 +0000 (0:00:01.163) 0:07:26.054 *****
2026-02-14 03:34:15.359587 | orchestrator | changed: [testbed-node-3]
2026-02-14 03:34:15.359598 | orchestrator | changed: [testbed-node-4]
2026-02-14 03:34:15.359609 | orchestrator | changed: [testbed-node-5]
2026-02-14 03:34:15.359620 | orchestrator |
2026-02-14 03:34:15.359630 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-02-14 03:34:15.359641 | orchestrator | Saturday 14 February 2026 03:34:05 +0000 (0:00:01.083) 0:07:27.137 *****
2026-02-14 03:34:15.359652 | orchestrator | changed: [testbed-node-3]
2026-02-14 03:34:15.359663 | orchestrator | changed: [testbed-node-4]
2026-02-14 03:34:15.359673 | orchestrator | changed: [testbed-node-5]
2026-02-14 03:34:15.359684 | orchestrator |
2026-02-14 03:34:15.359695 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-02-14 03:34:15.359706 | orchestrator | Saturday 14 February 2026 03:34:07 +0000 (0:00:01.940) 0:07:29.077 *****
2026-02-14 03:34:15.359717 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:34:15.359728 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:34:15.359738 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:34:15.359749 | orchestrator |
2026-02-14 03:34:15.359760 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-02-14 03:34:15.359771 | orchestrator | Saturday 14 February 2026 03:34:07 +0000 (0:00:00.353) 0:07:29.431 *****
2026-02-14 03:34:15.359782 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:34:15.359793 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:34:15.359804 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:34:15.359814 | orchestrator |
2026-02-14 03:34:15.359825 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-02-14 03:34:15.359836 | orchestrator | Saturday 14 February 2026 03:34:08 +0000 (0:00:00.331) 0:07:29.762 *****
2026-02-14 03:34:15.359847 | orchestrator | ok: [testbed-node-3] => (item=5)
2026-02-14 03:34:15.359863 | orchestrator | ok: [testbed-node-4] => (item=1)
2026-02-14 03:34:15.359874 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-02-14 03:34:15.359885 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-14 03:34:15.359896 | orchestrator | ok: [testbed-node-4] => (item=3)
2026-02-14 03:34:15.359907 | orchestrator | ok: [testbed-node-5] => (item=4)
2026-02-14 03:34:15.359918 | orchestrator |
2026-02-14 03:34:15.359929 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-02-14 03:34:15.359940 | orchestrator | Saturday 14 February 2026 03:34:09 +0000 (0:00:01.008) 0:07:30.771 *****
2026-02-14 03:34:15.359950 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-02-14 03:34:15.359962 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-02-14 03:34:15.359972 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-02-14 03:34:15.359983 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-02-14 03:34:15.359994 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-02-14 03:34:15.360005 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-02-14 03:34:15.360016 | orchestrator |
2026-02-14 03:34:15.360027 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-02-14 03:34:15.360038 | orchestrator | Saturday 14 February 2026 03:34:11 +0000 (0:00:02.412) 0:07:33.184 *****
2026-02-14 03:34:15.360048 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-02-14 03:34:15.360059 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-02-14 03:34:15.360070 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-02-14 03:34:15.360081 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-02-14 03:34:15.360099 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-02-14 03:34:47.045412 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-02-14 03:34:47.045524 | orchestrator |
2026-02-14 03:34:47.045541 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-02-14 03:34:47.045554 | orchestrator | Saturday 14 February 2026 03:34:15 +0000 (0:00:03.690) 0:07:36.874 *****
2026-02-14 03:34:47.045565 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:34:47.045576 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:34:47.045612 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-14 03:34:47.045624 | orchestrator |
2026-02-14 03:34:47.045635 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-02-14 03:34:47.045646 | orchestrator | Saturday 14 February 2026 03:34:18 +0000 (0:00:02.695) 0:07:39.570 *****
2026-02-14 03:34:47.045656 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:34:47.045667 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:34:47.045678 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-02-14 03:34:47.045690 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-14 03:34:47.045701 | orchestrator |
2026-02-14 03:34:47.045711 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-02-14 03:34:47.045722 | orchestrator | Saturday 14 February 2026 03:34:30 +0000 (0:00:12.363) 0:07:51.933 *****
2026-02-14 03:34:47.045733 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:34:47.045743 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:34:47.045754 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:34:47.045765 | orchestrator |
2026-02-14 03:34:47.045776 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-14 03:34:47.045787 | orchestrator | Saturday 14 February 2026 03:34:31 +0000 (0:00:01.188) 0:07:53.121 *****
2026-02-14 03:34:47.045798 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:34:47.045808 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:34:47.045819 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:34:47.045829 | orchestrator |
2026-02-14 03:34:47.045840 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-02-14 03:34:47.045851 | orchestrator | Saturday 14 February 2026 03:34:31 +0000 (0:00:00.349) 0:07:53.471 *****
2026-02-14 03:34:47.045862 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-14 03:34:47.045873 | orchestrator |
2026-02-14 03:34:47.045884 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-02-14 03:34:47.045894 | orchestrator | Saturday 14 February 2026 03:34:32 +0000 (0:00:00.826) 0:07:54.297 *****
2026-02-14 03:34:47.045907 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-14 03:34:47.045920 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-14 03:34:47.045932 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-14 03:34:47.045946 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:34:47.045964 | orchestrator |
2026-02-14 03:34:47.045987 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-02-14 03:34:47.046015 | orchestrator | Saturday 14 February 2026 03:34:33 +0000 (0:00:00.409) 0:07:54.707 *****
2026-02-14 03:34:47.046106 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:34:47.046124 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:34:47.046142 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:34:47.046161 | orchestrator |
2026-02-14 03:34:47.046179 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-02-14 03:34:47.046197 | orchestrator | Saturday 14 February 2026 03:34:33 +0000 (0:00:00.348) 0:07:55.056 *****
2026-02-14 03:34:47.046215 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:34:47.046234 | orchestrator |
2026-02-14 03:34:47.046282 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-02-14 03:34:47.046301 | orchestrator | Saturday 14 February 2026 03:34:33 +0000 (0:00:00.233) 0:07:55.290 *****
2026-02-14 03:34:47.046318 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:34:47.046337 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:34:47.046356 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:34:47.046373 | orchestrator |
2026-02-14 03:34:47.046392 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-02-14 03:34:47.046409 | orchestrator | Saturday 14 February 2026 03:34:34 +0000 (0:00:00.576) 0:07:55.866 *****
2026-02-14 03:34:47.046443 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:34:47.046462 | orchestrator |
2026-02-14 03:34:47.046498 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-02-14 03:34:47.046518 | orchestrator | Saturday 14 February 2026 03:34:34 +0000 (0:00:00.245) 0:07:56.112 *****
2026-02-14 03:34:47.046538 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:34:47.046556 | orchestrator |
2026-02-14 03:34:47.046575 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-02-14 03:34:47.046594 | orchestrator | Saturday 14 February 2026 03:34:34 +0000 (0:00:00.239) 0:07:56.351 *****
2026-02-14 03:34:47.046612 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:34:47.046630 | orchestrator |
2026-02-14 03:34:47.046649 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-02-14 03:34:47.046668 | orchestrator | Saturday 14 February 2026 03:34:34 +0000 (0:00:00.129) 0:07:56.481 *****
2026-02-14 03:34:47.046686 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:34:47.046705 | orchestrator |
2026-02-14 03:34:47.046724 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-02-14 03:34:47.046742 | orchestrator | Saturday 14 February 2026 03:34:35 +0000 (0:00:00.247) 0:07:56.728 *****
2026-02-14 03:34:47.046762 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:34:47.046780 | orchestrator |
2026-02-14 03:34:47.046799 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-02-14 03:34:47.046817 | orchestrator | Saturday 14 February 2026 03:34:35 +0000 (0:00:00.229) 0:07:56.958 *****
2026-02-14 03:34:47.046836 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-14 03:34:47.046854 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-14 03:34:47.046900 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-14 03:34:47.046920 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:34:47.046938 | orchestrator |
2026-02-14 03:34:47.046956 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-02-14 03:34:47.046973 | orchestrator | Saturday 14 February 2026 03:34:35 +0000 (0:00:00.456) 0:07:57.414 *****
2026-02-14 03:34:47.046991 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:34:47.047009 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:34:47.047025 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:34:47.047044 | orchestrator |
2026-02-14 03:34:47.047063 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-14 03:34:47.047081 | orchestrator | Saturday 14 February 2026 03:34:36 +0000 (0:00:00.343) 0:07:57.757 *****
2026-02-14 03:34:47.047098 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:34:47.047115 | orchestrator |
2026-02-14 03:34:47.047132 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-14 03:34:47.047149 | orchestrator | Saturday 14 February 2026 03:34:36 +0000 (0:00:00.254) 0:07:58.012 *****
2026-02-14 03:34:47.047167 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:34:47.047183 | orchestrator |
2026-02-14 03:34:47.047198 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-02-14 03:34:47.047215 | orchestrator |
2026-02-14 03:34:47.047232 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-14 03:34:47.047293 | orchestrator | Saturday 14 February 2026 03:34:37 +0000 (0:00:01.336) 0:07:59.348 *****
2026-02-14 03:34:47.047314 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 03:34:47.047334 | orchestrator |
2026-02-14 03:34:47.047352 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-14 03:34:47.047369 | orchestrator | Saturday 14 February 2026 03:34:39 +0000 (0:00:01.291) 0:08:00.640 *****
2026-02-14 03:34:47.047387 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 03:34:47.047423 | orchestrator |
2026-02-14 03:34:47.047441 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-14 03:34:47.047460 | orchestrator | Saturday 14 February 2026 03:34:40 +0000 (0:00:01.350) 0:08:01.990 *****
2026-02-14 03:34:47.047478 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:34:47.047496 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:34:47.047514 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:34:47.047533 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:34:47.047553 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:34:47.047571 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:34:47.047588 | orchestrator |
2026-02-14 03:34:47.047607 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-14 03:34:47.047626 | orchestrator | Saturday 14 February 2026 03:34:41 +0000 (0:00:01.254) 0:08:03.245 *****
2026-02-14 03:34:47.047643 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:34:47.047661 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:34:47.047681 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:34:47.047698 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:34:47.047716 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:34:47.047734 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:34:47.047752 | orchestrator |
2026-02-14 03:34:47.047769 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-14 03:34:47.047788 | orchestrator | Saturday 14 February 2026 03:34:42 +0000 (0:00:00.747) 0:08:03.993 *****
2026-02-14 03:34:47.047805 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:34:47.047823 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:34:47.047841 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:34:47.047859 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:34:47.047877 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:34:47.047894 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:34:47.047912 | orchestrator |
2026-02-14 03:34:47.047930 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-14 03:34:47.047946 | orchestrator | Saturday 14 February 2026 03:34:43 +0000 (0:00:00.955) 0:08:04.948 *****
2026-02-14 03:34:47.047963 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:34:47.047982 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:34:47.048000 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:34:47.048019 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:34:47.048038 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:34:47.048055 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:34:47.048073 | orchestrator |
2026-02-14 03:34:47.048104 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-14 03:34:47.048124 | orchestrator | Saturday 14 February 2026 03:34:44 +0000 (0:00:00.711) 0:08:05.660 *****
2026-02-14 03:34:47.048143 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:34:47.048162 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:34:47.048180 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:34:47.048198 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:34:47.048215 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:34:47.048233 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:34:47.048326 | orchestrator |
2026-02-14 03:34:47.048349 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container]
************************* 2026-02-14 03:34:47.048368 | orchestrator | Saturday 14 February 2026 03:34:45 +0000 (0:00:01.294) 0:08:06.954 ***** 2026-02-14 03:34:47.048387 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:34:47.048405 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:34:47.048422 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:34:47.048440 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:34:47.048459 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:34:47.048474 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:34:47.048494 | orchestrator | 2026-02-14 03:34:47.048513 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-14 03:34:47.048532 | orchestrator | Saturday 14 February 2026 03:34:46 +0000 (0:00:00.652) 0:08:07.606 ***** 2026-02-14 03:34:47.048551 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:34:47.048584 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:34:47.048602 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:34:47.048620 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:34:47.048660 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:35:18.555023 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:35:18.555159 | orchestrator | 2026-02-14 03:35:18.555180 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-14 03:35:18.555194 | orchestrator | Saturday 14 February 2026 03:34:47 +0000 (0:00:00.963) 0:08:08.569 ***** 2026-02-14 03:35:18.555206 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:35:18.555245 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:35:18.555256 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:35:18.555267 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:35:18.555278 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:35:18.555289 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:35:18.555300 | orchestrator 
| 2026-02-14 03:35:18.555311 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-14 03:35:18.555322 | orchestrator | Saturday 14 February 2026 03:34:48 +0000 (0:00:01.129) 0:08:09.699 ***** 2026-02-14 03:35:18.555333 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:35:18.555344 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:35:18.555355 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:35:18.555366 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:35:18.555376 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:35:18.555387 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:35:18.555398 | orchestrator | 2026-02-14 03:35:18.555409 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-14 03:35:18.555420 | orchestrator | Saturday 14 February 2026 03:34:49 +0000 (0:00:01.442) 0:08:11.141 ***** 2026-02-14 03:35:18.555431 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:35:18.555444 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:35:18.555455 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:35:18.555466 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:35:18.555477 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:35:18.555488 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:35:18.555499 | orchestrator | 2026-02-14 03:35:18.555510 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-14 03:35:18.555521 | orchestrator | Saturday 14 February 2026 03:34:50 +0000 (0:00:00.656) 0:08:11.798 ***** 2026-02-14 03:35:18.555532 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:35:18.555546 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:35:18.555558 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:35:18.555571 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:35:18.555584 | orchestrator | ok: [testbed-node-1] 2026-02-14 
03:35:18.555596 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:35:18.555609 | orchestrator | 2026-02-14 03:35:18.555622 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-14 03:35:18.555634 | orchestrator | Saturday 14 February 2026 03:34:51 +0000 (0:00:00.888) 0:08:12.686 ***** 2026-02-14 03:35:18.555647 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:35:18.555660 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:35:18.555672 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:35:18.555685 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:35:18.555697 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:35:18.555710 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:35:18.555722 | orchestrator | 2026-02-14 03:35:18.555735 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-14 03:35:18.555747 | orchestrator | Saturday 14 February 2026 03:34:51 +0000 (0:00:00.630) 0:08:13.316 ***** 2026-02-14 03:35:18.555760 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:35:18.555773 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:35:18.555785 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:35:18.555796 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:35:18.555807 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:35:18.555846 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:35:18.555858 | orchestrator | 2026-02-14 03:35:18.555869 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-14 03:35:18.555880 | orchestrator | Saturday 14 February 2026 03:34:52 +0000 (0:00:00.890) 0:08:14.207 ***** 2026-02-14 03:35:18.555891 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:35:18.555901 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:35:18.555912 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:35:18.555923 | orchestrator | skipping: [testbed-node-0] 
2026-02-14 03:35:18.555934 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:35:18.555945 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:35:18.555956 | orchestrator | 2026-02-14 03:35:18.555967 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-14 03:35:18.555978 | orchestrator | Saturday 14 February 2026 03:34:53 +0000 (0:00:00.647) 0:08:14.855 ***** 2026-02-14 03:35:18.555989 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:35:18.556000 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:35:18.556011 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:35:18.556022 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:35:18.556033 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:35:18.556045 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:35:18.556055 | orchestrator | 2026-02-14 03:35:18.556066 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-14 03:35:18.556078 | orchestrator | Saturday 14 February 2026 03:34:54 +0000 (0:00:00.897) 0:08:15.752 ***** 2026-02-14 03:35:18.556089 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:35:18.556100 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:35:18.556110 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:35:18.556121 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:35:18.556132 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:35:18.556143 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:35:18.556154 | orchestrator | 2026-02-14 03:35:18.556165 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-14 03:35:18.556175 | orchestrator | Saturday 14 February 2026 03:34:54 +0000 (0:00:00.618) 0:08:16.371 ***** 2026-02-14 03:35:18.556186 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:35:18.556197 | orchestrator | skipping: [testbed-node-4] 
2026-02-14 03:35:18.556229 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:35:18.556241 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:35:18.556251 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:35:18.556262 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:35:18.556273 | orchestrator | 2026-02-14 03:35:18.556284 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-14 03:35:18.556295 | orchestrator | Saturday 14 February 2026 03:34:55 +0000 (0:00:01.003) 0:08:17.375 ***** 2026-02-14 03:35:18.556306 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:35:18.556317 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:35:18.556328 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:35:18.556338 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:35:18.556369 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:35:18.556381 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:35:18.556392 | orchestrator | 2026-02-14 03:35:18.556403 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-14 03:35:18.556414 | orchestrator | Saturday 14 February 2026 03:34:56 +0000 (0:00:00.640) 0:08:18.016 ***** 2026-02-14 03:35:18.556425 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:35:18.556436 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:35:18.556537 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:35:18.556558 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:35:18.556570 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:35:18.556580 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:35:18.556592 | orchestrator | 2026-02-14 03:35:18.556603 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-02-14 03:35:18.556614 | orchestrator | Saturday 14 February 2026 03:34:57 +0000 (0:00:01.336) 0:08:19.352 ***** 2026-02-14 03:35:18.556636 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-02-14 03:35:18.556647 | orchestrator | 2026-02-14 03:35:18.556658 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-02-14 03:35:18.556668 | orchestrator | Saturday 14 February 2026 03:35:01 +0000 (0:00:03.971) 0:08:23.323 ***** 2026-02-14 03:35:18.556679 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-14 03:35:18.556690 | orchestrator | 2026-02-14 03:35:18.556701 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-02-14 03:35:18.556712 | orchestrator | Saturday 14 February 2026 03:35:04 +0000 (0:00:02.500) 0:08:25.824 ***** 2026-02-14 03:35:18.556722 | orchestrator | changed: [testbed-node-3] 2026-02-14 03:35:18.556733 | orchestrator | changed: [testbed-node-4] 2026-02-14 03:35:18.556744 | orchestrator | changed: [testbed-node-5] 2026-02-14 03:35:18.556754 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:35:18.556765 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:35:18.556775 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:35:18.556786 | orchestrator | 2026-02-14 03:35:18.556797 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-02-14 03:35:18.556808 | orchestrator | Saturday 14 February 2026 03:35:05 +0000 (0:00:01.543) 0:08:27.367 ***** 2026-02-14 03:35:18.556818 | orchestrator | changed: [testbed-node-3] 2026-02-14 03:35:18.556829 | orchestrator | changed: [testbed-node-4] 2026-02-14 03:35:18.556840 | orchestrator | changed: [testbed-node-5] 2026-02-14 03:35:18.556850 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:35:18.556861 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:35:18.556871 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:35:18.556882 | orchestrator | 2026-02-14 03:35:18.556893 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-02-14 03:35:18.556903 | orchestrator | Saturday 14 February 2026 03:35:07 +0000 (0:00:01.377) 0:08:28.745 ***** 2026-02-14 03:35:18.556915 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:35:18.556927 | orchestrator | 2026-02-14 03:35:18.556938 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-02-14 03:35:18.556949 | orchestrator | Saturday 14 February 2026 03:35:08 +0000 (0:00:01.259) 0:08:30.004 ***** 2026-02-14 03:35:18.556960 | orchestrator | changed: [testbed-node-3] 2026-02-14 03:35:18.556970 | orchestrator | changed: [testbed-node-4] 2026-02-14 03:35:18.556981 | orchestrator | changed: [testbed-node-5] 2026-02-14 03:35:18.556992 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:35:18.557002 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:35:18.557013 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:35:18.557024 | orchestrator | 2026-02-14 03:35:18.557035 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-02-14 03:35:18.557045 | orchestrator | Saturday 14 February 2026 03:35:09 +0000 (0:00:01.485) 0:08:31.490 ***** 2026-02-14 03:35:18.557056 | orchestrator | changed: [testbed-node-3] 2026-02-14 03:35:18.557067 | orchestrator | changed: [testbed-node-5] 2026-02-14 03:35:18.557077 | orchestrator | changed: [testbed-node-4] 2026-02-14 03:35:18.557088 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:35:18.557098 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:35:18.557109 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:35:18.557119 | orchestrator | 2026-02-14 03:35:18.557130 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-02-14 03:35:18.557141 | orchestrator | Saturday 14 February 2026 03:35:13 +0000 (0:00:03.391) 
0:08:34.881 ***** 2026-02-14 03:35:18.557158 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:35:18.557169 | orchestrator | 2026-02-14 03:35:18.557180 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-02-14 03:35:18.557197 | orchestrator | Saturday 14 February 2026 03:35:14 +0000 (0:00:01.087) 0:08:35.969 ***** 2026-02-14 03:35:18.557250 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:35:18.557262 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:35:18.557273 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:35:18.557284 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:35:18.557295 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:35:18.557305 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:35:18.557316 | orchestrator | 2026-02-14 03:35:18.557327 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-02-14 03:35:18.557338 | orchestrator | Saturday 14 February 2026 03:35:15 +0000 (0:00:00.656) 0:08:36.625 ***** 2026-02-14 03:35:18.557348 | orchestrator | changed: [testbed-node-3] 2026-02-14 03:35:18.557359 | orchestrator | changed: [testbed-node-4] 2026-02-14 03:35:18.557370 | orchestrator | changed: [testbed-node-5] 2026-02-14 03:35:18.557380 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:35:18.557391 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:35:18.557402 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:35:18.557412 | orchestrator | 2026-02-14 03:35:18.557423 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-02-14 03:35:18.557434 | orchestrator | Saturday 14 February 2026 03:35:17 +0000 (0:00:02.510) 0:08:39.136 ***** 2026-02-14 03:35:18.557445 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:35:18.557456 | orchestrator | 
ok: [testbed-node-4] 2026-02-14 03:35:18.557467 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:35:18.557477 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:35:18.557500 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:35:46.693391 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:35:46.693478 | orchestrator | 2026-02-14 03:35:46.693489 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-02-14 03:35:46.693497 | orchestrator | 2026-02-14 03:35:46.693504 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-14 03:35:46.693510 | orchestrator | Saturday 14 February 2026 03:35:18 +0000 (0:00:00.947) 0:08:40.083 ***** 2026-02-14 03:35:46.693517 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-14 03:35:46.693524 | orchestrator | 2026-02-14 03:35:46.693530 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-14 03:35:46.693535 | orchestrator | Saturday 14 February 2026 03:35:19 +0000 (0:00:00.989) 0:08:41.073 ***** 2026-02-14 03:35:46.693542 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-14 03:35:46.693547 | orchestrator | 2026-02-14 03:35:46.693553 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-14 03:35:46.693559 | orchestrator | Saturday 14 February 2026 03:35:20 +0000 (0:00:00.550) 0:08:41.624 ***** 2026-02-14 03:35:46.693565 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:35:46.693572 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:35:46.693577 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:35:46.693583 | orchestrator | 2026-02-14 03:35:46.693589 | orchestrator | TASK [ceph-handler : Check for an osd container] 
******************************* 2026-02-14 03:35:46.693595 | orchestrator | Saturday 14 February 2026 03:35:20 +0000 (0:00:00.570) 0:08:42.195 ***** 2026-02-14 03:35:46.693600 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:35:46.693606 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:35:46.693612 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:35:46.693618 | orchestrator | 2026-02-14 03:35:46.693623 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-14 03:35:46.693629 | orchestrator | Saturday 14 February 2026 03:35:21 +0000 (0:00:00.717) 0:08:42.912 ***** 2026-02-14 03:35:46.693635 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:35:46.693641 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:35:46.693646 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:35:46.693652 | orchestrator | 2026-02-14 03:35:46.693658 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-14 03:35:46.693682 | orchestrator | Saturday 14 February 2026 03:35:22 +0000 (0:00:00.787) 0:08:43.700 ***** 2026-02-14 03:35:46.693688 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:35:46.693693 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:35:46.693699 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:35:46.693705 | orchestrator | 2026-02-14 03:35:46.693710 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-14 03:35:46.693716 | orchestrator | Saturday 14 February 2026 03:35:23 +0000 (0:00:01.047) 0:08:44.748 ***** 2026-02-14 03:35:46.693722 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:35:46.693728 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:35:46.693734 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:35:46.693739 | orchestrator | 2026-02-14 03:35:46.693745 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-14 
03:35:46.693751 | orchestrator | Saturday 14 February 2026 03:35:23 +0000 (0:00:00.313) 0:08:45.061 ***** 2026-02-14 03:35:46.693756 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:35:46.693762 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:35:46.693768 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:35:46.693773 | orchestrator | 2026-02-14 03:35:46.693779 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-14 03:35:46.693785 | orchestrator | Saturday 14 February 2026 03:35:23 +0000 (0:00:00.342) 0:08:45.403 ***** 2026-02-14 03:35:46.693790 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:35:46.693796 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:35:46.693802 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:35:46.693808 | orchestrator | 2026-02-14 03:35:46.693813 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-14 03:35:46.693819 | orchestrator | Saturday 14 February 2026 03:35:24 +0000 (0:00:00.352) 0:08:45.756 ***** 2026-02-14 03:35:46.693825 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:35:46.693831 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:35:46.693836 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:35:46.693842 | orchestrator | 2026-02-14 03:35:46.693848 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-14 03:35:46.693865 | orchestrator | Saturday 14 February 2026 03:35:25 +0000 (0:00:00.994) 0:08:46.751 ***** 2026-02-14 03:35:46.693871 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:35:46.693877 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:35:46.693882 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:35:46.693888 | orchestrator | 2026-02-14 03:35:46.693894 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-14 03:35:46.693900 | orchestrator | 
Saturday 14 February 2026 03:35:25 +0000 (0:00:00.759) 0:08:47.510 ***** 2026-02-14 03:35:46.693905 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:35:46.693911 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:35:46.693917 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:35:46.693922 | orchestrator | 2026-02-14 03:35:46.693928 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-14 03:35:46.693935 | orchestrator | Saturday 14 February 2026 03:35:26 +0000 (0:00:00.346) 0:08:47.857 ***** 2026-02-14 03:35:46.693942 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:35:46.693949 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:35:46.693955 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:35:46.693962 | orchestrator | 2026-02-14 03:35:46.693968 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-14 03:35:46.693975 | orchestrator | Saturday 14 February 2026 03:35:26 +0000 (0:00:00.351) 0:08:48.209 ***** 2026-02-14 03:35:46.693981 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:35:46.693988 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:35:46.693994 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:35:46.694000 | orchestrator | 2026-02-14 03:35:46.694007 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-14 03:35:46.694013 | orchestrator | Saturday 14 February 2026 03:35:27 +0000 (0:00:00.653) 0:08:48.862 ***** 2026-02-14 03:35:46.694086 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:35:46.694093 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:35:46.694100 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:35:46.694107 | orchestrator | 2026-02-14 03:35:46.694113 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-14 03:35:46.694120 | orchestrator | Saturday 14 February 2026 03:35:27 
+0000 (0:00:00.355) 0:08:49.218 ***** 2026-02-14 03:35:46.694127 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:35:46.694133 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:35:46.694139 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:35:46.694146 | orchestrator | 2026-02-14 03:35:46.694153 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-14 03:35:46.694159 | orchestrator | Saturday 14 February 2026 03:35:28 +0000 (0:00:00.375) 0:08:49.593 ***** 2026-02-14 03:35:46.694166 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:35:46.694173 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:35:46.694180 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:35:46.694212 | orchestrator | 2026-02-14 03:35:46.694222 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-14 03:35:46.694231 | orchestrator | Saturday 14 February 2026 03:35:28 +0000 (0:00:00.304) 0:08:49.897 ***** 2026-02-14 03:35:46.694241 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:35:46.694250 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:35:46.694261 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:35:46.694271 | orchestrator | 2026-02-14 03:35:46.694282 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-14 03:35:46.694290 | orchestrator | Saturday 14 February 2026 03:35:28 +0000 (0:00:00.566) 0:08:50.463 ***** 2026-02-14 03:35:46.694296 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:35:46.694302 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:35:46.694308 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:35:46.694313 | orchestrator | 2026-02-14 03:35:46.694319 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-14 03:35:46.694325 | orchestrator | Saturday 14 February 2026 03:35:29 +0000 (0:00:00.328) 
0:08:50.792 ***** 2026-02-14 03:35:46.694330 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:35:46.694336 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:35:46.694342 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:35:46.694348 | orchestrator | 2026-02-14 03:35:46.694353 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-14 03:35:46.694359 | orchestrator | Saturday 14 February 2026 03:35:29 +0000 (0:00:00.344) 0:08:51.136 ***** 2026-02-14 03:35:46.694365 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:35:46.694371 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:35:46.694376 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:35:46.694382 | orchestrator | 2026-02-14 03:35:46.694388 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-02-14 03:35:46.694393 | orchestrator | Saturday 14 February 2026 03:35:30 +0000 (0:00:00.783) 0:08:51.919 ***** 2026-02-14 03:35:46.694399 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:35:46.694405 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:35:46.694411 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-02-14 03:35:46.694417 | orchestrator | 2026-02-14 03:35:46.694423 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-02-14 03:35:46.694429 | orchestrator | Saturday 14 February 2026 03:35:30 +0000 (0:00:00.419) 0:08:52.339 ***** 2026-02-14 03:35:46.694434 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-14 03:35:46.694440 | orchestrator | 2026-02-14 03:35:46.694446 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-02-14 03:35:46.694452 | orchestrator | Saturday 14 February 2026 03:35:32 +0000 (0:00:02.073) 0:08:54.413 ***** 2026-02-14 03:35:46.694459 | orchestrator | skipping: [testbed-node-3] => 
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-02-14 03:35:46.694472 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:35:46.694478 | orchestrator | 2026-02-14 03:35:46.694484 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-02-14 03:35:46.694489 | orchestrator | Saturday 14 February 2026 03:35:33 +0000 (0:00:00.278) 0:08:54.691 ***** 2026-02-14 03:35:46.694502 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-14 03:35:46.694514 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-14 03:35:46.694520 | orchestrator | 2026-02-14 03:35:46.694526 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-02-14 03:35:46.694531 | orchestrator | Saturday 14 February 2026 03:35:41 +0000 (0:00:08.051) 0:09:02.742 ***** 2026-02-14 03:35:46.694537 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-14 03:35:46.694543 | orchestrator | 2026-02-14 03:35:46.694548 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-02-14 03:35:46.694554 | orchestrator | Saturday 14 February 2026 03:35:44 +0000 (0:00:03.530) 0:09:06.273 ***** 2026-02-14 03:35:46.694560 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-02-14 03:35:46.694566 | orchestrator | 2026-02-14 03:35:46.694571 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-02-14 03:35:46.694577 | orchestrator | Saturday 14 February 2026 03:35:45 +0000 (0:00:00.885) 0:09:07.159 ***** 2026-02-14 03:35:46.694588 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-14 03:36:12.951777 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-14 03:36:12.951886 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-14 03:36:12.951900 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-02-14 03:36:12.951912 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-02-14 03:36:12.951922 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-02-14 03:36:12.951932 | orchestrator | 2026-02-14 03:36:12.951942 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-02-14 03:36:12.951952 | orchestrator | Saturday 14 February 2026 03:35:46 +0000 (0:00:01.059) 0:09:08.218 ***** 2026-02-14 03:36:12.951962 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-14 03:36:12.951972 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-14 03:36:12.951982 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-14 03:36:12.951992 | orchestrator | 2026-02-14 03:36:12.952002 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-02-14 03:36:12.952012 | orchestrator | Saturday 14 February 2026 03:35:48 +0000 (0:00:02.177) 0:09:10.396 ***** 2026-02-14 03:36:12.952022 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-14 03:36:12.952033 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2026-02-14 03:36:12.952043 | orchestrator | changed: [testbed-node-3] 2026-02-14 03:36:12.952053 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-14 03:36:12.952062 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-14 03:36:12.952072 | orchestrator | changed: [testbed-node-4] 2026-02-14 03:36:12.952082 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-14 03:36:12.952091 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-14 03:36:12.952125 | orchestrator | changed: [testbed-node-5] 2026-02-14 03:36:12.952135 | orchestrator | 2026-02-14 03:36:12.952145 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-02-14 03:36:12.952154 | orchestrator | Saturday 14 February 2026 03:35:50 +0000 (0:00:01.252) 0:09:11.648 ***** 2026-02-14 03:36:12.952232 | orchestrator | changed: [testbed-node-3] 2026-02-14 03:36:12.952244 | orchestrator | changed: [testbed-node-4] 2026-02-14 03:36:12.952254 | orchestrator | changed: [testbed-node-5] 2026-02-14 03:36:12.952264 | orchestrator | 2026-02-14 03:36:12.952274 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-02-14 03:36:12.952283 | orchestrator | Saturday 14 February 2026 03:35:53 +0000 (0:00:02.961) 0:09:14.610 ***** 2026-02-14 03:36:12.952293 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:36:12.952302 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:36:12.952312 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:36:12.952324 | orchestrator | 2026-02-14 03:36:12.952336 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-02-14 03:36:12.952347 | orchestrator | Saturday 14 February 2026 03:35:53 +0000 (0:00:00.349) 0:09:14.960 ***** 2026-02-14 03:36:12.952358 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-02-14 03:36:12.952370 | orchestrator | 2026-02-14 03:36:12.952381 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-02-14 03:36:12.952391 | orchestrator | Saturday 14 February 2026 03:35:54 +0000 (0:00:00.812) 0:09:15.772 ***** 2026-02-14 03:36:12.952403 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-14 03:36:12.952414 | orchestrator | 2026-02-14 03:36:12.952425 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-02-14 03:36:12.952436 | orchestrator | Saturday 14 February 2026 03:35:54 +0000 (0:00:00.573) 0:09:16.346 ***** 2026-02-14 03:36:12.952447 | orchestrator | changed: [testbed-node-3] 2026-02-14 03:36:12.952458 | orchestrator | changed: [testbed-node-4] 2026-02-14 03:36:12.952469 | orchestrator | changed: [testbed-node-5] 2026-02-14 03:36:12.952480 | orchestrator | 2026-02-14 03:36:12.952491 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-02-14 03:36:12.952517 | orchestrator | Saturday 14 February 2026 03:35:56 +0000 (0:00:01.260) 0:09:17.607 ***** 2026-02-14 03:36:12.952534 | orchestrator | changed: [testbed-node-3] 2026-02-14 03:36:12.952552 | orchestrator | changed: [testbed-node-4] 2026-02-14 03:36:12.952569 | orchestrator | changed: [testbed-node-5] 2026-02-14 03:36:12.952585 | orchestrator | 2026-02-14 03:36:12.952603 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-02-14 03:36:12.952619 | orchestrator | Saturday 14 February 2026 03:35:57 +0000 (0:00:01.388) 0:09:18.995 ***** 2026-02-14 03:36:12.952633 | orchestrator | changed: [testbed-node-3] 2026-02-14 03:36:12.952643 | orchestrator | changed: [testbed-node-4] 2026-02-14 03:36:12.952652 | orchestrator | changed: [testbed-node-5] 2026-02-14 03:36:12.952662 | orchestrator | 2026-02-14 
03:36:12.952671 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-02-14 03:36:12.952681 | orchestrator | Saturday 14 February 2026 03:35:59 +0000 (0:00:01.739) 0:09:20.735 ***** 2026-02-14 03:36:12.952691 | orchestrator | changed: [testbed-node-3] 2026-02-14 03:36:12.952700 | orchestrator | changed: [testbed-node-5] 2026-02-14 03:36:12.952710 | orchestrator | changed: [testbed-node-4] 2026-02-14 03:36:12.952720 | orchestrator | 2026-02-14 03:36:12.952729 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-02-14 03:36:12.952739 | orchestrator | Saturday 14 February 2026 03:36:01 +0000 (0:00:01.934) 0:09:22.669 ***** 2026-02-14 03:36:12.952749 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:36:12.952758 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:36:12.952768 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:36:12.952778 | orchestrator | 2026-02-14 03:36:12.952787 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-14 03:36:12.952806 | orchestrator | Saturday 14 February 2026 03:36:02 +0000 (0:00:01.484) 0:09:24.154 ***** 2026-02-14 03:36:12.952816 | orchestrator | changed: [testbed-node-3] 2026-02-14 03:36:12.952826 | orchestrator | changed: [testbed-node-4] 2026-02-14 03:36:12.952853 | orchestrator | changed: [testbed-node-5] 2026-02-14 03:36:12.952863 | orchestrator | 2026-02-14 03:36:12.952872 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-14 03:36:12.952882 | orchestrator | Saturday 14 February 2026 03:36:03 +0000 (0:00:00.694) 0:09:24.849 ***** 2026-02-14 03:36:12.952892 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-14 03:36:12.952902 | orchestrator | 2026-02-14 03:36:12.952912 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-02-14 03:36:12.952922 | orchestrator | Saturday 14 February 2026 03:36:04 +0000 (0:00:00.755) 0:09:25.604 ***** 2026-02-14 03:36:12.952931 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:36:12.952941 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:36:12.952951 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:36:12.952961 | orchestrator | 2026-02-14 03:36:12.952970 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-02-14 03:36:12.952980 | orchestrator | Saturday 14 February 2026 03:36:04 +0000 (0:00:00.349) 0:09:25.954 ***** 2026-02-14 03:36:12.952990 | orchestrator | changed: [testbed-node-3] 2026-02-14 03:36:12.953000 | orchestrator | changed: [testbed-node-4] 2026-02-14 03:36:12.953010 | orchestrator | changed: [testbed-node-5] 2026-02-14 03:36:12.953019 | orchestrator | 2026-02-14 03:36:12.953029 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-02-14 03:36:12.953039 | orchestrator | Saturday 14 February 2026 03:36:05 +0000 (0:00:01.234) 0:09:27.188 ***** 2026-02-14 03:36:12.953049 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-14 03:36:12.953059 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-14 03:36:12.953069 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-14 03:36:12.953079 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:36:12.953088 | orchestrator | 2026-02-14 03:36:12.953098 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-02-14 03:36:12.953108 | orchestrator | Saturday 14 February 2026 03:36:06 +0000 (0:00:00.913) 0:09:28.101 ***** 2026-02-14 03:36:12.953118 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:36:12.953127 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:36:12.953137 | orchestrator | ok: [testbed-node-5] 2026-02-14 
03:36:12.953147 | orchestrator | 2026-02-14 03:36:12.953156 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-02-14 03:36:12.953196 | orchestrator | 2026-02-14 03:36:12.953207 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-14 03:36:12.953217 | orchestrator | Saturday 14 February 2026 03:36:07 +0000 (0:00:00.948) 0:09:29.050 ***** 2026-02-14 03:36:12.953227 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-14 03:36:12.953238 | orchestrator | 2026-02-14 03:36:12.953248 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-14 03:36:12.953258 | orchestrator | Saturday 14 February 2026 03:36:08 +0000 (0:00:00.531) 0:09:29.581 ***** 2026-02-14 03:36:12.953268 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-14 03:36:12.953277 | orchestrator | 2026-02-14 03:36:12.953287 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-14 03:36:12.953297 | orchestrator | Saturday 14 February 2026 03:36:08 +0000 (0:00:00.793) 0:09:30.375 ***** 2026-02-14 03:36:12.953306 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:36:12.953316 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:36:12.953326 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:36:12.953342 | orchestrator | 2026-02-14 03:36:12.953356 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-14 03:36:12.953373 | orchestrator | Saturday 14 February 2026 03:36:09 +0000 (0:00:00.332) 0:09:30.708 ***** 2026-02-14 03:36:12.953388 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:36:12.953406 | orchestrator | ok: [testbed-node-4] 2026-02-14 
03:36:12.953416 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:36:12.953426 | orchestrator | 2026-02-14 03:36:12.953435 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-14 03:36:12.953445 | orchestrator | Saturday 14 February 2026 03:36:09 +0000 (0:00:00.695) 0:09:31.403 ***** 2026-02-14 03:36:12.953454 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:36:12.953470 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:36:12.953480 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:36:12.953490 | orchestrator | 2026-02-14 03:36:12.953499 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-14 03:36:12.953509 | orchestrator | Saturday 14 February 2026 03:36:10 +0000 (0:00:00.950) 0:09:32.354 ***** 2026-02-14 03:36:12.953519 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:36:12.953528 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:36:12.953538 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:36:12.953547 | orchestrator | 2026-02-14 03:36:12.953557 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-14 03:36:12.953566 | orchestrator | Saturday 14 February 2026 03:36:11 +0000 (0:00:00.776) 0:09:33.130 ***** 2026-02-14 03:36:12.953576 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:36:12.953586 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:36:12.953595 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:36:12.953605 | orchestrator | 2026-02-14 03:36:12.953614 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-14 03:36:12.953624 | orchestrator | Saturday 14 February 2026 03:36:11 +0000 (0:00:00.377) 0:09:33.507 ***** 2026-02-14 03:36:12.953633 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:36:12.953643 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:36:12.953652 | orchestrator | skipping: 
[testbed-node-5] 2026-02-14 03:36:12.953662 | orchestrator | 2026-02-14 03:36:12.953672 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-14 03:36:12.953681 | orchestrator | Saturday 14 February 2026 03:36:12 +0000 (0:00:00.374) 0:09:33.882 ***** 2026-02-14 03:36:12.953691 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:36:12.953700 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:36:12.953710 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:36:12.953719 | orchestrator | 2026-02-14 03:36:12.953736 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-14 03:36:35.820246 | orchestrator | Saturday 14 February 2026 03:36:12 +0000 (0:00:00.589) 0:09:34.472 ***** 2026-02-14 03:36:35.820357 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:36:35.820373 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:36:35.820385 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:36:35.820396 | orchestrator | 2026-02-14 03:36:35.820408 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-14 03:36:35.820419 | orchestrator | Saturday 14 February 2026 03:36:13 +0000 (0:00:00.728) 0:09:35.200 ***** 2026-02-14 03:36:35.820430 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:36:35.820440 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:36:35.820451 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:36:35.820462 | orchestrator | 2026-02-14 03:36:35.820473 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-14 03:36:35.820484 | orchestrator | Saturday 14 February 2026 03:36:14 +0000 (0:00:00.749) 0:09:35.949 ***** 2026-02-14 03:36:35.820495 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:36:35.820507 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:36:35.820517 | orchestrator | skipping: [testbed-node-5] 2026-02-14 
03:36:35.820528 | orchestrator | 2026-02-14 03:36:35.820539 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-14 03:36:35.820573 | orchestrator | Saturday 14 February 2026 03:36:14 +0000 (0:00:00.349) 0:09:36.298 ***** 2026-02-14 03:36:35.820585 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:36:35.820596 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:36:35.820607 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:36:35.820618 | orchestrator | 2026-02-14 03:36:35.820629 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-14 03:36:35.820640 | orchestrator | Saturday 14 February 2026 03:36:15 +0000 (0:00:00.597) 0:09:36.895 ***** 2026-02-14 03:36:35.820651 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:36:35.820662 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:36:35.820673 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:36:35.820683 | orchestrator | 2026-02-14 03:36:35.820695 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-14 03:36:35.820705 | orchestrator | Saturday 14 February 2026 03:36:15 +0000 (0:00:00.386) 0:09:37.282 ***** 2026-02-14 03:36:35.820716 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:36:35.820727 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:36:35.820737 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:36:35.820750 | orchestrator | 2026-02-14 03:36:35.820762 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-14 03:36:35.820774 | orchestrator | Saturday 14 February 2026 03:36:16 +0000 (0:00:00.351) 0:09:37.633 ***** 2026-02-14 03:36:35.820786 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:36:35.820799 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:36:35.820811 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:36:35.820823 | orchestrator | 2026-02-14 
03:36:35.820835 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-14 03:36:35.820847 | orchestrator | Saturday 14 February 2026 03:36:16 +0000 (0:00:00.346) 0:09:37.979 ***** 2026-02-14 03:36:35.820860 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:36:35.820872 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:36:35.820884 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:36:35.820897 | orchestrator | 2026-02-14 03:36:35.820909 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-14 03:36:35.820922 | orchestrator | Saturday 14 February 2026 03:36:17 +0000 (0:00:00.601) 0:09:38.581 ***** 2026-02-14 03:36:35.820934 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:36:35.820945 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:36:35.820956 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:36:35.820966 | orchestrator | 2026-02-14 03:36:35.820977 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-14 03:36:35.820988 | orchestrator | Saturday 14 February 2026 03:36:17 +0000 (0:00:00.334) 0:09:38.915 ***** 2026-02-14 03:36:35.820999 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:36:35.821010 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:36:35.821020 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:36:35.821031 | orchestrator | 2026-02-14 03:36:35.821042 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-14 03:36:35.821052 | orchestrator | Saturday 14 February 2026 03:36:17 +0000 (0:00:00.378) 0:09:39.294 ***** 2026-02-14 03:36:35.821063 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:36:35.821074 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:36:35.821085 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:36:35.821096 | orchestrator | 2026-02-14 03:36:35.821122 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-14 03:36:35.821133 | orchestrator | Saturday 14 February 2026 03:36:18 +0000 (0:00:00.371) 0:09:39.666 ***** 2026-02-14 03:36:35.821167 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:36:35.821183 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:36:35.821193 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:36:35.821204 | orchestrator | 2026-02-14 03:36:35.821215 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-02-14 03:36:35.821226 | orchestrator | Saturday 14 February 2026 03:36:18 +0000 (0:00:00.869) 0:09:40.535 ***** 2026-02-14 03:36:35.821246 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-14 03:36:35.821257 | orchestrator | 2026-02-14 03:36:35.821268 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-14 03:36:35.821279 | orchestrator | Saturday 14 February 2026 03:36:19 +0000 (0:00:00.572) 0:09:41.108 ***** 2026-02-14 03:36:35.821290 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-14 03:36:35.821301 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-14 03:36:35.821312 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-14 03:36:35.821323 | orchestrator | 2026-02-14 03:36:35.821334 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-14 03:36:35.821344 | orchestrator | Saturday 14 February 2026 03:36:22 +0000 (0:00:02.620) 0:09:43.729 ***** 2026-02-14 03:36:35.821355 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-14 03:36:35.821366 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-14 03:36:35.821377 | orchestrator | changed: [testbed-node-3] 2026-02-14 03:36:35.821405 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-02-14 03:36:35.821417 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-14 03:36:35.821427 | orchestrator | changed: [testbed-node-4] 2026-02-14 03:36:35.821438 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-14 03:36:35.821449 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-14 03:36:35.821460 | orchestrator | changed: [testbed-node-5] 2026-02-14 03:36:35.821470 | orchestrator | 2026-02-14 03:36:35.821481 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-02-14 03:36:35.821492 | orchestrator | Saturday 14 February 2026 03:36:23 +0000 (0:00:01.477) 0:09:45.207 ***** 2026-02-14 03:36:35.821502 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:36:35.821513 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:36:35.821524 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:36:35.821534 | orchestrator | 2026-02-14 03:36:35.821545 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-14 03:36:35.821556 | orchestrator | Saturday 14 February 2026 03:36:23 +0000 (0:00:00.327) 0:09:45.534 ***** 2026-02-14 03:36:35.821567 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-14 03:36:35.821578 | orchestrator | 2026-02-14 03:36:35.821588 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-14 03:36:35.821599 | orchestrator | Saturday 14 February 2026 03:36:24 +0000 (0:00:00.565) 0:09:46.099 ***** 2026-02-14 03:36:35.821611 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-14 03:36:35.821624 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-14 03:36:35.821635 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-14 03:36:35.821646 | orchestrator | 2026-02-14 03:36:35.821657 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-14 03:36:35.821668 | orchestrator | Saturday 14 February 2026 03:36:25 +0000 (0:00:01.196) 0:09:47.296 ***** 2026-02-14 03:36:35.821678 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-14 03:36:35.821689 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-14 03:36:35.821700 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-14 03:36:35.821711 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-14 03:36:35.821729 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-14 03:36:35.821740 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-14 03:36:35.821751 | orchestrator | 2026-02-14 03:36:35.821762 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-14 03:36:35.821773 | orchestrator | Saturday 14 February 2026 03:36:31 +0000 (0:00:05.257) 0:09:52.554 ***** 2026-02-14 03:36:35.821783 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-14 03:36:35.821794 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-14 03:36:35.821805 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-14 03:36:35.821815 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-14 03:36:35.821826 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-14 03:36:35.821842 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-14 03:36:35.821853 | orchestrator | 2026-02-14 03:36:35.821864 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-14 03:36:35.821874 | orchestrator | Saturday 14 February 2026 03:36:33 +0000 (0:00:02.386) 0:09:54.941 ***** 2026-02-14 03:36:35.821885 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-14 03:36:35.821896 | orchestrator | changed: [testbed-node-3] 2026-02-14 03:36:35.821907 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-14 03:36:35.821917 | orchestrator | changed: [testbed-node-4] 2026-02-14 03:36:35.821928 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-14 03:36:35.821939 | orchestrator | changed: [testbed-node-5] 2026-02-14 03:36:35.821950 | orchestrator | 2026-02-14 03:36:35.821960 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-14 03:36:35.821971 | orchestrator | Saturday 14 February 2026 03:36:34 +0000 (0:00:01.525) 0:09:56.466 ***** 2026-02-14 03:36:35.821982 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-02-14 03:36:35.821993 | orchestrator | 2026-02-14 03:36:35.822003 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-14 03:36:35.822014 | orchestrator | Saturday 14 February 2026 03:36:35 +0000 (0:00:00.249) 0:09:56.716 ***** 2026-02-14 03:36:35.822092 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-02-14 03:36:35.822104 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 03:36:35.822122 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 03:37:19.831264 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 03:37:19.831370 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 03:37:19.831384 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:37:19.831394 | orchestrator | 2026-02-14 03:37:19.831405 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-14 03:37:19.831416 | orchestrator | Saturday 14 February 2026 03:36:35 +0000 (0:00:00.627) 0:09:57.343 ***** 2026-02-14 03:37:19.831425 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 03:37:19.831434 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 03:37:19.831443 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 03:37:19.831474 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 03:37:19.831484 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 03:37:19.831493 | orchestrator | skipping: [testbed-node-3] 2026-02-14 
03:37:19.831502 | orchestrator |
2026-02-14 03:37:19.831511 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-02-14 03:37:19.831520 | orchestrator | Saturday 14 February 2026 03:36:36 +0000 (0:00:00.624) 0:09:57.968 *****
2026-02-14 03:37:19.831529 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 03:37:19.831539 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 03:37:19.831548 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 03:37:19.831557 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 03:37:19.831565 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 03:37:19.831574 | orchestrator |
2026-02-14 03:37:19.831583 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-02-14 03:37:19.831592 | orchestrator | Saturday 14 February 2026 03:37:07 +0000 (0:00:30.960) 0:10:28.928 *****
2026-02-14 03:37:19.831600 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:37:19.831609 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:37:19.831618 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:37:19.831627 | orchestrator |
2026-02-14 03:37:19.831635 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-02-14 03:37:19.831644 | orchestrator | Saturday 14 February 2026 03:37:07 +0000 (0:00:00.332) 0:10:29.261 *****
2026-02-14 03:37:19.831653 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:37:19.831662 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:37:19.831670 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:37:19.831679 | orchestrator |
2026-02-14 03:37:19.831701 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-02-14 03:37:19.831710 | orchestrator | Saturday 14 February 2026 03:37:08 +0000 (0:00:00.821) 0:10:29.584 *****
2026-02-14 03:37:19.831720 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-14 03:37:19.831729 | orchestrator |
2026-02-14 03:37:19.831737 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-02-14 03:37:19.831747 | orchestrator | Saturday 14 February 2026 03:37:08 +0000 (0:00:00.548) 0:10:30.405 *****
2026-02-14 03:37:19.831757 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-14 03:37:19.831768 | orchestrator |
2026-02-14 03:37:19.831778 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-02-14 03:37:19.831788 | orchestrator | Saturday 14 February 2026 03:37:09 +0000 (0:00:00.548) 0:10:30.953 *****
2026-02-14 03:37:19.831798 | orchestrator | changed: [testbed-node-3]
2026-02-14 03:37:19.831809 | orchestrator | changed: [testbed-node-4]
2026-02-14 03:37:19.831819 | orchestrator | changed: [testbed-node-5]
2026-02-14 03:37:19.831829 | orchestrator |
2026-02-14 03:37:19.831839 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-02-14 03:37:19.831849 | orchestrator | Saturday 14 February 2026 03:37:10 +0000 (0:00:01.549) 0:10:32.503 *****
2026-02-14 03:37:19.831867 | orchestrator | changed: [testbed-node-3]
2026-02-14 03:37:19.831877 | orchestrator | changed: [testbed-node-4]
2026-02-14 03:37:19.831887 | orchestrator | changed: [testbed-node-5]
2026-02-14 03:37:19.831897 | orchestrator |
2026-02-14 03:37:19.831907 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-02-14 03:37:19.831917 | orchestrator | Saturday 14 February 2026 03:37:12 +0000 (0:00:01.199) 0:10:33.702 *****
2026-02-14 03:37:19.831927 | orchestrator | changed: [testbed-node-3]
2026-02-14 03:37:19.831952 | orchestrator | changed: [testbed-node-4]
2026-02-14 03:37:19.831963 | orchestrator | changed: [testbed-node-5]
2026-02-14 03:37:19.831973 | orchestrator |
2026-02-14 03:37:19.831983 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-02-14 03:37:19.831993 | orchestrator | Saturday 14 February 2026 03:37:13 +0000 (0:00:01.748) 0:10:35.451 *****
2026-02-14 03:37:19.832004 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-14 03:37:19.832020 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-14 03:37:19.832034 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-14 03:37:19.832048 | orchestrator |
2026-02-14 03:37:19.832062 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-14 03:37:19.832077 | orchestrator | Saturday 14 February 2026 03:37:16 +0000 (0:00:02.610) 0:10:38.061 *****
2026-02-14 03:37:19.832090 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:37:19.832105 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:37:19.832144 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:37:19.832159 | orchestrator |
2026-02-14 03:37:19.832175 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-02-14 03:37:19.832189 | orchestrator | Saturday 14 February 2026 03:37:16 +0000 (0:00:00.365) 0:10:38.427 *****
2026-02-14 03:37:19.832202 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-14 03:37:19.832217 | orchestrator |
2026-02-14 03:37:19.832232 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-02-14 03:37:19.832248 | orchestrator | Saturday 14 February 2026 03:37:17 +0000 (0:00:00.840) 0:10:39.268 *****
2026-02-14 03:37:19.832263 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:37:19.832279 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:37:19.832293 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:37:19.832308 | orchestrator |
2026-02-14 03:37:19.832323 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-02-14 03:37:19.832337 | orchestrator | Saturday 14 February 2026 03:37:18 +0000 (0:00:00.339) 0:10:39.608 *****
2026-02-14 03:37:19.832352 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:37:19.832367 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:37:19.832382 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:37:19.832392 | orchestrator |
2026-02-14 03:37:19.832401 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-02-14 03:37:19.832409 | orchestrator | Saturday 14 February 2026 03:37:18 +0000 (0:00:00.329) 0:10:39.937 *****
2026-02-14 03:37:19.832418 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-14 03:37:19.832428 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-14 03:37:19.832436 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-14 03:37:19.832445 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:37:19.832453 | orchestrator |
2026-02-14 03:37:19.832462 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-02-14 03:37:19.832471 | orchestrator | Saturday 14 February 2026 03:37:19 +0000 (0:00:00.895) 0:10:40.833 *****
2026-02-14 03:37:19.832480 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:37:19.832488 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:37:19.832506 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:37:19.832515 | orchestrator |
2026-02-14 03:37:19.832524 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 03:37:19.832532 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2026-02-14 03:37:19.832549 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2026-02-14 03:37:19.832558 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2026-02-14 03:37:19.832566 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2026-02-14 03:37:19.832575 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2026-02-14 03:37:19.832583 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2026-02-14 03:37:19.832592 | orchestrator |
2026-02-14 03:37:19.832601 | orchestrator |
2026-02-14 03:37:19.832609 | orchestrator |
2026-02-14 03:37:19.832618 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 03:37:19.832627 | orchestrator | Saturday 14 February 2026 03:37:19 +0000 (0:00:00.509) 0:10:41.343 *****
2026-02-14 03:37:19.832635 | orchestrator | ===============================================================================
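An editorial aside: the `PLAY RECAP` host lines above follow a fixed `<host> : key=value ...` layout, which makes them easy to consume when post-processing job logs like this one. A minimal parser sketch (`parse_recap_line` is a hypothetical helper, not part of the testbed or Zuul tooling):

```python
# Parse one Ansible PLAY RECAP host line into (hostname, counter dict).
# Hypothetical helper; the line layout is taken from the recap above.
def parse_recap_line(line: str) -> tuple[str, dict[str, int]]:
    host, _, counters = line.partition(":")
    stats = {}
    for field in counters.split():
        key, _, value = field.partition("=")
        stats[key] = int(value)
    return host.strip(), stats

host, stats = parse_recap_line(
    "testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0"
)
# stats["failed"] > 0 would flag a broken host even when the job itself passed.
```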
2026-02-14 03:37:19.832644 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 56.51s
2026-02-14 03:37:19.832653 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 43.99s
2026-02-14 03:37:19.832661 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.96s
2026-02-14 03:37:19.832670 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.39s
2026-02-14 03:37:19.832679 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.90s
2026-02-14 03:37:19.832696 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.65s
2026-02-14 03:37:20.254484 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.36s
2026-02-14 03:37:20.254590 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.41s
2026-02-14 03:37:20.254605 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.29s
2026-02-14 03:37:20.254617 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.05s
2026-02-14 03:37:20.254628 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.33s
2026-02-14 03:37:20.254639 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.27s
2026-02-14 03:37:20.254650 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 5.26s
2026-02-14 03:37:20.254660 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.03s
2026-02-14 03:37:20.254671 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.97s
2026-02-14 03:37:20.254682 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.69s
2026-02-14 03:37:20.254693 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.53s
2026-02-14 03:37:20.254704 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.52s
2026-02-14 03:37:20.254715 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.39s
2026-02-14 03:37:20.254726 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.07s
2026-02-14 03:37:22.682788 | orchestrator | 2026-02-14 03:37:22 | INFO  | Task 7f6dd1aa-8f51-4146-87b0-be2d6227c3e5 (ceph-pools) was prepared for execution.
2026-02-14 03:37:22.682915 | orchestrator | 2026-02-14 03:37:22 | INFO  | It takes a moment until task 7f6dd1aa-8f51-4146-87b0-be2d6227c3e5 (ceph-pools) has been started and output is visible here.
2026-02-14 03:37:36.201632 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-14 03:37:36.201771 | orchestrator | 2.16.14
2026-02-14 03:37:36.201799 | orchestrator |
2026-02-14 03:37:36.201819 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-02-14 03:37:36.201840 | orchestrator |
2026-02-14 03:37:36.201851 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-14 03:37:36.201862 | orchestrator | Saturday 14 February 2026 03:37:27 +0000 (0:00:00.599) 0:00:00.599 *****
2026-02-14 03:37:36.201874 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-14 03:37:36.201885 | orchestrator |
2026-02-14 03:37:36.201896 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-14 03:37:36.201907 | orchestrator | Saturday 14 February 2026 03:37:27 +0000 (0:00:00.491) 0:00:01.090 *****
2026-02-14 03:37:36.201918 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:37:36.201929 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:37:36.201939 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:37:36.201950 | orchestrator |
2026-02-14 03:37:36.201961 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-14 03:37:36.201972 | orchestrator | Saturday 14 February 2026 03:37:28 +0000 (0:00:00.588) 0:00:01.678 *****
2026-02-14 03:37:36.201982 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:37:36.201993 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:37:36.202004 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:37:36.202080 | orchestrator |
2026-02-14 03:37:36.202094 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-14 03:37:36.202146 | orchestrator | Saturday 14 February 2026 03:37:28 +0000 (0:00:00.288) 0:00:01.966 *****
2026-02-14 03:37:36.202161 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:37:36.202174 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:37:36.202186 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:37:36.202198 | orchestrator |
2026-02-14 03:37:36.202228 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-14 03:37:36.202241 | orchestrator | Saturday 14 February 2026 03:37:29 +0000 (0:00:00.747) 0:00:02.714 *****
2026-02-14 03:37:36.202253 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:37:36.202266 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:37:36.202278 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:37:36.202290 | orchestrator |
2026-02-14 03:37:36.202303 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-14 03:37:36.202316 | orchestrator | Saturday 14 February 2026 03:37:29 +0000 (0:00:00.279) 0:00:02.994 *****
2026-02-14 03:37:36.202328 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:37:36.202340 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:37:36.202352 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:37:36.202365 | orchestrator |
2026-02-14 03:37:36.202377 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-14 03:37:36.202390 | orchestrator | Saturday 14 February 2026 03:37:29 +0000 (0:00:00.279) 0:00:03.273 *****
2026-02-14 03:37:36.202402 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:37:36.202422 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:37:36.202440 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:37:36.202459 | orchestrator |
2026-02-14 03:37:36.202477 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-14 03:37:36.202496 | orchestrator | Saturday 14 February 2026 03:37:30 +0000 (0:00:00.302) 0:00:03.575 *****
2026-02-14 03:37:36.202513 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:37:36.202532 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:37:36.202550 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:37:36.202570 | orchestrator |
2026-02-14 03:37:36.202589 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-14 03:37:36.202632 | orchestrator | Saturday 14 February 2026 03:37:30 +0000 (0:00:00.423) 0:00:03.998 *****
2026-02-14 03:37:36.202644 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:37:36.202655 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:37:36.202666 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:37:36.202676 | orchestrator |
2026-02-14 03:37:36.202687 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-14 03:37:36.202698 | orchestrator | Saturday 14 February 2026 03:37:30 +0000 (0:00:00.289) 0:00:04.288 *****
2026-02-14 03:37:36.202709 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-14 03:37:36.202719 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-14 03:37:36.202730 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-14 03:37:36.202741 | orchestrator |
2026-02-14 03:37:36.202751 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-14 03:37:36.202762 | orchestrator | Saturday 14 February 2026 03:37:31 +0000 (0:00:00.637) 0:00:04.925 *****
2026-02-14 03:37:36.202773 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:37:36.202783 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:37:36.202794 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:37:36.202812 | orchestrator |
2026-02-14 03:37:36.202839 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-14 03:37:36.202861 | orchestrator | Saturday 14 February 2026 03:37:31 +0000 (0:00:00.468) 0:00:05.394 *****
2026-02-14 03:37:36.202879 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-14 03:37:36.202896 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-14 03:37:36.202913 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-14 03:37:36.202928 | orchestrator |
2026-02-14 03:37:36.202946 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-14 03:37:36.202964 | orchestrator | Saturday 14 February 2026 03:37:34 +0000 (0:00:02.153) 0:00:07.548 *****
2026-02-14 03:37:36.202983 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-14 03:37:36.203002 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-14 03:37:36.203022 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-14 03:37:36.203040 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:37:36.203057 | orchestrator |
2026-02-14 03:37:36.203090 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-14 03:37:36.203102 | orchestrator | Saturday 14 February 2026 03:37:34 +0000 (0:00:00.653) 0:00:08.201 *****
2026-02-14 03:37:36.203149 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-14 03:37:36.203171 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-14 03:37:36.203190 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-14 03:37:36.203209 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:37:36.203228 | orchestrator |
2026-02-14 03:37:36.203246 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-14 03:37:36.203266 | orchestrator | Saturday 14 February 2026 03:37:35 +0000 (0:00:01.064) 0:00:09.266 *****
2026-02-14 03:37:36.203297 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-14 03:37:36.203329 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-14 03:37:36.203341 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-14 03:37:36.203353 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:37:36.203364 | orchestrator |
2026-02-14 03:37:36.203374 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-14 03:37:36.203385 | orchestrator | Saturday 14 February 2026 03:37:35 +0000 (0:00:00.161) 0:00:09.427 *****
2026-02-14 03:37:36.203398 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '775cd2ba237c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-14 03:37:32.819970', 'end': '2026-02-14 03:37:32.870352', 'delta': '0:00:00.050382', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['775cd2ba237c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-14 03:37:36.203412 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '26dcb1313f5c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-14 03:37:33.395268', 'end': '2026-02-14 03:37:33.444515', 'delta': '0:00:00.049247', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['26dcb1313f5c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-14 03:37:36.203437 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '7aff8e7c54ed', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-14 03:37:33.928821', 'end': '2026-02-14 03:37:33.972228', 'delta': '0:00:00.043407', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['7aff8e7c54ed'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-14 03:37:43.140052 | orchestrator |
2026-02-14 03:37:43.140174 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-14 03:37:43.140182 | orchestrator | Saturday 14 February 2026 03:37:36 +0000 (0:00:00.207) 0:00:09.635 *****
2026-02-14 03:37:43.140204 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:37:43.140210 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:37:43.140214 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:37:43.140219 | orchestrator |
2026-02-14 03:37:43.140223 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-14 03:37:43.140228 | orchestrator | Saturday 14 February 2026 03:37:36 +0000 (0:00:00.461) 0:00:10.096 *****
2026-02-14 03:37:43.140233 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-02-14 03:37:43.140238 | orchestrator |
2026-02-14 03:37:43.140252 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-14 03:37:43.140257 | orchestrator | Saturday 14 February 2026 03:37:38 +0000 (0:00:01.650) 0:00:11.747 *****
2026-02-14 03:37:43.140261 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:37:43.140266 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:37:43.140270 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:37:43.140275 | orchestrator |
2026-02-14 03:37:43.140279 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-14 03:37:43.140283 | orchestrator | Saturday 14 February 2026 03:37:38 +0000 (0:00:00.830) 0:00:12.040 *****
2026-02-14 03:37:43.140288 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:37:43.140292 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:37:43.140297 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:37:43.140301 | orchestrator |
2026-02-14 03:37:43.140305 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-14 03:37:43.140309 | orchestrator | Saturday 14 February 2026 03:37:39 +0000 (0:00:00.307) 0:00:12.870 *****
2026-02-14 03:37:43.140314 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:37:43.140318 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:37:43.140323 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:37:43.140327 | orchestrator |
2026-02-14 03:37:43.140332 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-14 03:37:43.140336 | orchestrator | Saturday 14 February 2026 03:37:39 +0000 (0:00:00.125) 0:00:13.178 *****
2026-02-14 03:37:43.140340 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:37:43.140345 | orchestrator |
2026-02-14 03:37:43.140349 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-14 03:37:43.140353 | orchestrator | Saturday 14 February 2026 03:37:39 +0000 (0:00:00.228) 0:00:13.304 *****
2026-02-14 03:37:43.140358 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:37:43.140362 | orchestrator |
2026-02-14 03:37:43.140367 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-14 03:37:43.140371 | orchestrator | Saturday 14 February 2026 03:37:40 +0000 (0:00:00.228) 0:00:13.532 *****
2026-02-14 03:37:43.140375 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:37:43.140380 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:37:43.140384 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:37:43.140388 | orchestrator |
2026-02-14 03:37:43.140393 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-14 03:37:43.140397 | orchestrator | Saturday 14 February 2026 03:37:40 +0000 (0:00:00.290) 0:00:13.823 *****
2026-02-14 03:37:43.140401 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:37:43.140406 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:37:43.140410 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:37:43.140417 | orchestrator |
2026-02-14 03:37:43.140424 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-14 03:37:43.140430 | orchestrator | Saturday 14 February 2026 03:37:40 +0000 (0:00:00.359) 0:00:14.182 *****
2026-02-14 03:37:43.140437 | orchestrator | skipping: [testbed-node-3]
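Worth noting for readers of this log: the verbose results of the "Find a running mon container" / "Set_fact running_mon - container" tasks above show exactly how ceph-facts locates a live monitor, by shelling out with `cmd=['docker', 'ps', '-q', '--filter', 'name=ceph-mon-<hostname>']` and treating non-empty stdout (e.g. `775cd2ba237c`) as "mon is running". A small sketch that rebuilds that command line (`build_mon_lookup_cmd` is a hypothetical helper name; the argument vector itself is taken verbatim from the log):

```python
# Rebuild the container lookup command shown in the task result above.
# build_mon_lookup_cmd is a hypothetical name; the argv mirrors the log's 'cmd'.
def build_mon_lookup_cmd(hostname: str, container_binary: str = "docker") -> list[str]:
    return [container_binary, "ps", "-q", "--filter", f"name=ceph-mon-{hostname}"]

cmd = build_mon_lookup_cmd("testbed-node-0")
# Running this via subprocess.run(cmd, capture_output=True) on a mon host
# would print the container ID when the mon container is up, else nothing.
```

The `container_binary` parameter reflects the earlier "Set_fact container_binary" task, which picks `podman` or `docker` depending on what the "Check if podman binary is present" probe found.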
2026-02-14 03:37:43.140443 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:37:43.140450 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:37:43.140456 | orchestrator |
2026-02-14 03:37:43.140463 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-14 03:37:43.140470 | orchestrator | Saturday 14 February 2026 03:37:41 +0000 (0:00:00.574) 0:00:14.756 *****
2026-02-14 03:37:43.140483 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:37:43.140490 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:37:43.140497 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:37:43.140505 | orchestrator |
2026-02-14 03:37:43.140512 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-14 03:37:43.140519 | orchestrator | Saturday 14 February 2026 03:37:41 +0000 (0:00:00.362) 0:00:15.119 *****
2026-02-14 03:37:43.140526 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:37:43.140532 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:37:43.140537 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:37:43.140541 | orchestrator |
2026-02-14 03:37:43.140546 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-14 03:37:43.140550 | orchestrator | Saturday 14 February 2026 03:37:42 +0000 (0:00:00.354) 0:00:15.474 *****
2026-02-14 03:37:43.140555 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:37:43.140559 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:37:43.140563 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:37:43.140568 | orchestrator |
2026-02-14 03:37:43.140572 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-14 03:37:43.140577 | orchestrator | Saturday 14 February 2026 03:37:42 +0000 (0:00:00.504) 0:00:15.979 *****
2026-02-14 03:37:43.140582 | orchestrator | skipping: [testbed-node-3]
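A reading aid for the "Collect existed devices" output that follows: each iterated item has the shape of an `ansible_facts['devices']` entry, carrying `holders`, `partitions`, `links`, and size fields. That structure is what lets deployment tooling tell an empty candidate OSD disk apart from loop devices, device-mapper volumes, an already-claimed LVM member (non-empty `holders`, like `sdb` below), or the partitioned root disk (`sda`). A rough illustration, assuming only that dict shape (hypothetical filter, not the ceph-facts implementation):

```python
# Pick devices that look like unused whole disks from an
# ansible_facts['devices']-shaped mapping (hypothetical filter, illustrative only).
def candidate_disks(devices: dict) -> list[str]:
    picked = []
    for name, info in devices.items():
        if name.startswith(("loop", "dm-")):  # virtual loop / device-mapper entries
            continue
        if info.get("partitions"):            # already partitioned, e.g. the root disk
            continue
        if info.get("holders"):               # already claimed, e.g. by a ceph LVM volume
            continue
        picked.append(name)
    return picked

# Minimal made-up sample mirroring the device shapes in the log below.
devices = {
    "loop0": {"holders": [], "partitions": {}},
    "sda": {"holders": [], "partitions": {"sda1": {}}},
    "sdb": {"holders": ["ceph-lv-osd-block"], "partitions": {}},
    "sdc": {"holders": [], "partitions": {}},
}
```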
2026-02-14 03:37:43.140586 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:37:43.140590 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:37:43.140595 | orchestrator |
2026-02-14 03:37:43.140599 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-14 03:37:43.140604 | orchestrator | Saturday 14 February 2026 03:37:42 +0000 (0:00:00.390) 0:00:16.370 *****
2026-02-14 03:37:43.140622 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d74a1ea4--c27e--5375--be56--9d9a8e069fa6-osd--block--d74a1ea4--c27e--5375--be56--9d9a8e069fa6', 'dm-uuid-LVM-bsT5DZ8cw32sKmXOfJetQqGU0HxblzT0Oj0FlQ0hDfJ2MaenWm21pneMRY3n5AFS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-14 03:37:43.140633 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--86d1df08--738c--52e0--accb--8c0a21213af6-osd--block--86d1df08--738c--52e0--accb--8c0a21213af6', 'dm-uuid-LVM-y8TFd42k7h3tskYaBmVU96eirAODLPPWLm3s7r1uHf3qd9eZ715af0u59pi4vRGe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-14 03:37:43.140640 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-14 03:37:43.140647 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-14 03:37:43.140653 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-14 03:37:43.140662 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-14 03:37:43.140668 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-14 03:37:43.140673 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-14 03:37:43.140678 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-14 03:37:43.140688 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-14 03:37:43.228981 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part1', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part14', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part15', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part16', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-14 03:37:43.229221 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7b577363--2bac--543e--944e--5354861b1af5-osd--block--7b577363--2bac--543e--944e--5354861b1af5', 'dm-uuid-LVM-0VL0CxXxe2vdWsz49rVaxb3uSV9CWoFcSN89ximT6SOMxwvqsIuUyBOeGRYcFBXd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-14 03:37:43.229246 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d74a1ea4--c27e--5375--be56--9d9a8e069fa6-osd--block--d74a1ea4--c27e--5375--be56--9d9a8e069fa6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-D7g0SF-SeWa-7MSU-rwcF-cnTN-mPuF-kfA0YK', 'scsi-0QEMU_QEMU_HARDDISK_763dae4f-8aba-40cd-b4e7-eeabad093491', 'scsi-SQEMU_QEMU_HARDDISK_763dae4f-8aba-40cd-b4e7-eeabad093491'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-14 03:37:43.229278 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--df737486--1b51--5b4a--92b8--76d7a8957091-osd--block--df737486--1b51--5b4a--92b8--76d7a8957091', 'dm-uuid-LVM-EB1XqRdFm5BWl32sOsML4BzRiPAaSfab8xK25yZZCddpKgHxc3NQuNizerGpwRdL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize':
'512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-14 03:37:43.229298 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--86d1df08--738c--52e0--accb--8c0a21213af6-osd--block--86d1df08--738c--52e0--accb--8c0a21213af6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oc2pXT-2pSW-cOnk-GYPm-BmdS-2yWK-CLqXT7', 'scsi-0QEMU_QEMU_HARDDISK_2ec12fdb-ec43-4dc2-9206-4086e60213b8', 'scsi-SQEMU_QEMU_HARDDISK_2ec12fdb-ec43-4dc2-9206-4086e60213b8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-14 03:37:43.229310 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:37:43.229334 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8657c064-423f-4604-b6db-e42322d0b025', 'scsi-SQEMU_QEMU_HARDDISK_8657c064-423f-4604-b6db-e42322d0b025'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-14 03:37:43.229345 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:37:43.229358 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-14-02-18-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-14 03:37:43.229370 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-02-14 03:37:43.229382 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:37:43.229399 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:37:43.445458 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:37:43.445574 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:37:43.445589 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:37:43.445629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part1', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part14', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part15', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part16', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part16'], 'labels': ['BOOT'], 'masters': 
[], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-14 03:37:43.445665 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7b577363--2bac--543e--944e--5354861b1af5-osd--block--7b577363--2bac--543e--944e--5354861b1af5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PPJEoE-t8lH-Lsu9-VCxv-DzG3-SEi9-DpziQD', 'scsi-0QEMU_QEMU_HARDDISK_f8b6a063-90c4-466f-950a-7ec8689e5fcc', 'scsi-SQEMU_QEMU_HARDDISK_f8b6a063-90c4-466f-950a-7ec8689e5fcc'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-14 03:37:43.445686 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--df737486--1b51--5b4a--92b8--76d7a8957091-osd--block--df737486--1b51--5b4a--92b8--76d7a8957091'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9XBo1I-CFLx-ADHD-pZVq-BmE6-mdcf-IWW9zX', 'scsi-0QEMU_QEMU_HARDDISK_f960435b-b83d-47c8-ac31-653544f80bd0', 'scsi-SQEMU_QEMU_HARDDISK_f960435b-b83d-47c8-ac31-653544f80bd0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-14 03:37:43.445698 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_600e740f-7698-45cf-9f18-28df3084435e', 'scsi-SQEMU_QEMU_HARDDISK_600e740f-7698-45cf-9f18-28df3084435e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-14 03:37:43.445718 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-14-02-18-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-14 03:37:43.445730 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:37:43.445744 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:37:43.445756 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1745485d--ab31--507e--930d--8d3ce82a0691-osd--block--1745485d--ab31--507e--930d--8d3ce82a0691', 'dm-uuid-LVM-XF74CRGH0USDiTPtHNxBQbnIHrjKBwEGozNSSmTzZ40xZxDrUnqvt7q7MTHzgzhl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-14 03:37:43.445769 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f7da5590--35e5--5703--96c8--37fe127c27f7-osd--block--f7da5590--35e5--5703--96c8--37fe127c27f7', 'dm-uuid-LVM-MtrIT20WffpmoZtgfeTXRFdMHN6P3sAdBjy5doWEhe9rKv9L584cW3XE9oTwvrjF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-14 03:37:43.445781 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:37:43.445801 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:37:43.656614 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:37:43.656712 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:37:43.656755 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:37:43.656767 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2026-02-14 03:37:43.656779 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:37:43.656790 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-14 03:37:43.656828 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part1', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part14', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part15', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part16', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-14 03:37:43.656854 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--1745485d--ab31--507e--930d--8d3ce82a0691-osd--block--1745485d--ab31--507e--930d--8d3ce82a0691'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-56EAYM-xHsu-7hCn-RY2l-0van-u71J-PPT3Ej', 'scsi-0QEMU_QEMU_HARDDISK_89ffb490-ef56-465e-9c2a-8772cc279d48', 'scsi-SQEMU_QEMU_HARDDISK_89ffb490-ef56-465e-9c2a-8772cc279d48'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-14 03:37:43.656867 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f7da5590--35e5--5703--96c8--37fe127c27f7-osd--block--f7da5590--35e5--5703--96c8--37fe127c27f7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5s32D9-BYka-Bj8X-nglK-5PU8-KqP1-tEDCHR', 'scsi-0QEMU_QEMU_HARDDISK_54e6ca54-a1fe-4396-8891-5cf52f763d40', 'scsi-SQEMU_QEMU_HARDDISK_54e6ca54-a1fe-4396-8891-5cf52f763d40'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-14 03:37:43.656880 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43152e32-b25a-4e6e-b6b7-c3272099ce67', 'scsi-SQEMU_QEMU_HARDDISK_43152e32-b25a-4e6e-b6b7-c3272099ce67'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-14 03:37:43.656892 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-14-02-18-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-14 03:37:43.656905 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:37:43.656918 | orchestrator | 2026-02-14 03:37:43.656930 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-14 03:37:43.656942 | orchestrator | Saturday 14 February 2026 03:37:43 +0000 (0:00:00.623) 0:00:16.993 ***** 2026-02-14 03:37:43.656962 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d74a1ea4--c27e--5375--be56--9d9a8e069fa6-osd--block--d74a1ea4--c27e--5375--be56--9d9a8e069fa6', 'dm-uuid-LVM-bsT5DZ8cw32sKmXOfJetQqGU0HxblzT0Oj0FlQ0hDfJ2MaenWm21pneMRY3n5AFS'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:43.764501 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--86d1df08--738c--52e0--accb--8c0a21213af6-osd--block--86d1df08--738c--52e0--accb--8c0a21213af6', 'dm-uuid-LVM-y8TFd42k7h3tskYaBmVU96eirAODLPPWLm3s7r1uHf3qd9eZ715af0u59pi4vRGe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:43.764579 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:43.764589 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:43.764595 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:43.764602 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:43.764608 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:43.764648 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:43.764656 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:43.764662 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:43.764669 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7b577363--2bac--543e--944e--5354861b1af5-osd--block--7b577363--2bac--543e--944e--5354861b1af5', 'dm-uuid-LVM-0VL0CxXxe2vdWsz49rVaxb3uSV9CWoFcSN89ximT6SOMxwvqsIuUyBOeGRYcFBXd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:43.764688 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part1', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part14', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part15', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part16', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-14 03:37:43.862908 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--df737486--1b51--5b4a--92b8--76d7a8957091-osd--block--df737486--1b51--5b4a--92b8--76d7a8957091', 'dm-uuid-LVM-EB1XqRdFm5BWl32sOsML4BzRiPAaSfab8xK25yZZCddpKgHxc3NQuNizerGpwRdL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:43.863031 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--d74a1ea4--c27e--5375--be56--9d9a8e069fa6-osd--block--d74a1ea4--c27e--5375--be56--9d9a8e069fa6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-D7g0SF-SeWa-7MSU-rwcF-cnTN-mPuF-kfA0YK', 'scsi-0QEMU_QEMU_HARDDISK_763dae4f-8aba-40cd-b4e7-eeabad093491', 'scsi-SQEMU_QEMU_HARDDISK_763dae4f-8aba-40cd-b4e7-eeabad093491'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:43.863054 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:43.863072 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--86d1df08--738c--52e0--accb--8c0a21213af6-osd--block--86d1df08--738c--52e0--accb--8c0a21213af6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oc2pXT-2pSW-cOnk-GYPm-BmdS-2yWK-CLqXT7', 'scsi-0QEMU_QEMU_HARDDISK_2ec12fdb-ec43-4dc2-9206-4086e60213b8', 'scsi-SQEMU_QEMU_HARDDISK_2ec12fdb-ec43-4dc2-9206-4086e60213b8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:43.863162 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:43.863207 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8657c064-423f-4604-b6db-e42322d0b025', 'scsi-SQEMU_QEMU_HARDDISK_8657c064-423f-4604-b6db-e42322d0b025'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:43.863225 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:43.863242 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-14-02-18-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:43.863260 | orchestrator | skipping: 
[testbed-node-3] 2026-02-14 03:37:43.863278 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:43.863296 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:43.863329 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:43.863359 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:43.955358 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:43.955462 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part1', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part14', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part15', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part16', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-14 03:37:43.955521 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1745485d--ab31--507e--930d--8d3ce82a0691-osd--block--1745485d--ab31--507e--930d--8d3ce82a0691', 'dm-uuid-LVM-XF74CRGH0USDiTPtHNxBQbnIHrjKBwEGozNSSmTzZ40xZxDrUnqvt7q7MTHzgzhl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:43.955557 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--7b577363--2bac--543e--944e--5354861b1af5-osd--block--7b577363--2bac--543e--944e--5354861b1af5'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PPJEoE-t8lH-Lsu9-VCxv-DzG3-SEi9-DpziQD', 'scsi-0QEMU_QEMU_HARDDISK_f8b6a063-90c4-466f-950a-7ec8689e5fcc', 'scsi-SQEMU_QEMU_HARDDISK_f8b6a063-90c4-466f-950a-7ec8689e5fcc'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:43.955571 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f7da5590--35e5--5703--96c8--37fe127c27f7-osd--block--f7da5590--35e5--5703--96c8--37fe127c27f7', 'dm-uuid-LVM-MtrIT20WffpmoZtgfeTXRFdMHN6P3sAdBjy5doWEhe9rKv9L584cW3XE9oTwvrjF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:43.955583 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--df737486--1b51--5b4a--92b8--76d7a8957091-osd--block--df737486--1b51--5b4a--92b8--76d7a8957091'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9XBo1I-CFLx-ADHD-pZVq-BmE6-mdcf-IWW9zX', 'scsi-0QEMU_QEMU_HARDDISK_f960435b-b83d-47c8-ac31-653544f80bd0', 'scsi-SQEMU_QEMU_HARDDISK_f960435b-b83d-47c8-ac31-653544f80bd0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:43.955610 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:43.955628 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_600e740f-7698-45cf-9f18-28df3084435e', 'scsi-SQEMU_QEMU_HARDDISK_600e740f-7698-45cf-9f18-28df3084435e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:43.955649 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:44.133842 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-14-02-18-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:44.133941 | orchestrator | skipping: 
[testbed-node-4] 2026-02-14 03:37:44.133958 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:44.133972 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:44.134008 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:44.134098 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:44.134146 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:44.134177 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:44.134194 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part1', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part14', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part15', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part16', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:44.134222 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--1745485d--ab31--507e--930d--8d3ce82a0691-osd--block--1745485d--ab31--507e--930d--8d3ce82a0691'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-56EAYM-xHsu-7hCn-RY2l-0van-u71J-PPT3Ej', 'scsi-0QEMU_QEMU_HARDDISK_89ffb490-ef56-465e-9c2a-8772cc279d48', 'scsi-SQEMU_QEMU_HARDDISK_89ffb490-ef56-465e-9c2a-8772cc279d48'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:44.134244 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f7da5590--35e5--5703--96c8--37fe127c27f7-osd--block--f7da5590--35e5--5703--96c8--37fe127c27f7'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5s32D9-BYka-Bj8X-nglK-5PU8-KqP1-tEDCHR', 'scsi-0QEMU_QEMU_HARDDISK_54e6ca54-a1fe-4396-8891-5cf52f763d40', 'scsi-SQEMU_QEMU_HARDDISK_54e6ca54-a1fe-4396-8891-5cf52f763d40'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:54.335960 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43152e32-b25a-4e6e-b6b7-c3272099ce67', 'scsi-SQEMU_QEMU_HARDDISK_43152e32-b25a-4e6e-b6b7-c3272099ce67'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:54.336073 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-14-02-18-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-14 03:37:54.336157 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:37:54.336185 | orchestrator | 2026-02-14 03:37:54.336208 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-14 03:37:54.336222 | orchestrator | Saturday 14 February 2026 03:37:44 +0000 (0:00:00.577) 0:00:17.571 ***** 2026-02-14 03:37:54.336233 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:37:54.336244 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:37:54.336255 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:37:54.336266 | orchestrator | 2026-02-14 03:37:54.336276 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-14 03:37:54.336287 | orchestrator | Saturday 14 February 2026 03:37:44 +0000 (0:00:00.850) 0:00:18.421 ***** 2026-02-14 03:37:54.336298 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:37:54.336308 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:37:54.336319 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:37:54.336329 | orchestrator | 2026-02-14 03:37:54.336340 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-14 03:37:54.336350 | orchestrator | Saturday 14 February 2026 03:37:45 +0000 (0:00:00.312) 0:00:18.734 ***** 2026-02-14 03:37:54.336361 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:37:54.336372 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:37:54.336382 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:37:54.336393 | orchestrator | 2026-02-14 03:37:54.336419 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-14 03:37:54.336430 | orchestrator | Saturday 14 February 2026 03:37:45 +0000 (0:00:00.661) 
0:00:19.395 *****
2026-02-14 03:37:54.336441 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:37:54.336452 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:37:54.336463 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:37:54.336473 | orchestrator |
2026-02-14 03:37:54.336484 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-14 03:37:54.336497 | orchestrator | Saturday 14 February 2026 03:37:46 +0000 (0:00:00.300) 0:00:19.695 *****
2026-02-14 03:37:54.336509 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:37:54.336521 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:37:54.336534 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:37:54.336546 | orchestrator |
2026-02-14 03:37:54.336558 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-14 03:37:54.336571 | orchestrator | Saturday 14 February 2026 03:37:46 +0000 (0:00:00.678) 0:00:20.374 *****
2026-02-14 03:37:54.336583 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:37:54.336595 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:37:54.336607 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:37:54.336619 | orchestrator |
2026-02-14 03:37:54.336631 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-14 03:37:54.336644 | orchestrator | Saturday 14 February 2026 03:37:47 +0000 (0:00:00.321) 0:00:20.695 *****
2026-02-14 03:37:54.336655 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-14 03:37:54.336668 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-14 03:37:54.336680 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-14 03:37:54.336692 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-14 03:37:54.336703 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-14 03:37:54.336716 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-14 03:37:54.336728 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-14 03:37:54.336748 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-14 03:37:54.336761 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-14 03:37:54.336774 | orchestrator |
2026-02-14 03:37:54.336786 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-14 03:37:54.336799 | orchestrator | Saturday 14 February 2026 03:37:48 +0000 (0:00:01.198) 0:00:21.894 *****
2026-02-14 03:37:54.336828 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-14 03:37:54.336842 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-14 03:37:54.336854 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-14 03:37:54.336865 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:37:54.336876 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-14 03:37:54.336886 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-14 03:37:54.336897 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-14 03:37:54.336908 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:37:54.336918 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-14 03:37:54.336929 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-14 03:37:54.336939 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-14 03:37:54.336950 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:37:54.336961 | orchestrator |
2026-02-14 03:37:54.336971 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-14 03:37:54.336982 | orchestrator | Saturday 14 February 2026 03:37:48 +0000 (0:00:00.398) 0:00:22.293 *****
2026-02-14 03:37:54.336993 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-14 03:37:54.337004 | orchestrator |
2026-02-14 03:37:54.337015 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-14 03:37:54.337027 | orchestrator | Saturday 14 February 2026 03:37:49 +0000 (0:00:00.755) 0:00:23.048 *****
2026-02-14 03:37:54.337038 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:37:54.337049 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:37:54.337059 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:37:54.337070 | orchestrator |
2026-02-14 03:37:54.337081 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-14 03:37:54.337091 | orchestrator | Saturday 14 February 2026 03:37:49 +0000 (0:00:00.334) 0:00:23.382 *****
2026-02-14 03:37:54.337126 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:37:54.337138 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:37:54.337148 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:37:54.337159 | orchestrator |
2026-02-14 03:37:54.337170 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-14 03:37:54.337181 | orchestrator | Saturday 14 February 2026 03:37:50 +0000 (0:00:00.322) 0:00:23.705 *****
2026-02-14 03:37:54.337192 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:37:54.337203 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:37:54.337213 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:37:54.337224 | orchestrator |
2026-02-14 03:37:54.337235 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-14 03:37:54.337246 | orchestrator | Saturday 14 February 2026 03:37:50 +0000 (0:00:00.531) 0:00:24.237 *****
2026-02-14 03:37:54.337257 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:37:54.337268 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:37:54.337279 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:37:54.337289 | orchestrator |
2026-02-14 03:37:54.337300 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-14 03:37:54.337311 | orchestrator | Saturday 14 February 2026 03:37:51 +0000 (0:00:00.406) 0:00:24.643 *****
2026-02-14 03:37:54.337322 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-14 03:37:54.337340 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-14 03:37:54.337356 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-14 03:37:54.337368 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:37:54.337379 | orchestrator |
2026-02-14 03:37:54.337389 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-14 03:37:54.337400 | orchestrator | Saturday 14 February 2026 03:37:51 +0000 (0:00:00.393) 0:00:25.036 *****
2026-02-14 03:37:54.337411 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-14 03:37:54.337422 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-14 03:37:54.337433 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-14 03:37:54.337444 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:37:54.337454 | orchestrator |
2026-02-14 03:37:54.337465 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-14 03:37:54.337476 | orchestrator | Saturday 14 February 2026 03:37:51 +0000 (0:00:00.375) 0:00:25.412 *****
2026-02-14 03:37:54.337487 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-14 03:37:54.337497 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-14 03:37:54.337508 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-14 03:37:54.337519 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:37:54.337529 | orchestrator |
2026-02-14 03:37:54.337540 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-14 03:37:54.337551 | orchestrator | Saturday 14 February 2026 03:37:52 +0000 (0:00:00.383) 0:00:25.795 *****
2026-02-14 03:37:54.337562 | orchestrator | ok: [testbed-node-3]
2026-02-14 03:37:54.337573 | orchestrator | ok: [testbed-node-4]
2026-02-14 03:37:54.337584 | orchestrator | ok: [testbed-node-5]
2026-02-14 03:37:54.337594 | orchestrator |
2026-02-14 03:37:54.337605 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-14 03:37:54.337616 | orchestrator | Saturday 14 February 2026 03:37:52 +0000 (0:00:00.351) 0:00:26.146 *****
2026-02-14 03:37:54.337626 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-14 03:37:54.337637 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-14 03:37:54.337648 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-14 03:37:54.337659 | orchestrator |
2026-02-14 03:37:54.337669 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-14 03:37:54.337680 | orchestrator | Saturday 14 February 2026 03:37:53 +0000 (0:00:00.801) 0:00:26.948 *****
2026-02-14 03:37:54.337691 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-14 03:37:54.337709 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-14 03:39:34.601387 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-14 03:39:34.601491 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-14 03:39:34.601505 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-14 03:39:34.601516 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-14 03:39:34.601525 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-14 03:39:34.601534 | orchestrator |
2026-02-14 03:39:34.601544 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-14 03:39:34.601554 | orchestrator | Saturday 14 February 2026 03:37:54 +0000 (0:00:00.820) 0:00:27.769 *****
2026-02-14 03:39:34.601563 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-14 03:39:34.601572 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-14 03:39:34.601580 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-14 03:39:34.601589 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-14 03:39:34.601620 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-14 03:39:34.601629 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-14 03:39:34.601638 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-14 03:39:34.601646 | orchestrator |
2026-02-14 03:39:34.601655 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2026-02-14 03:39:34.601664 | orchestrator | Saturday 14 February 2026 03:37:56 +0000 (0:00:01.700) 0:00:29.469 *****
2026-02-14 03:39:34.601673 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:39:34.601682 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:39:34.601691 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2026-02-14 03:39:34.601700 | orchestrator |
2026-02-14 03:39:34.601709 |
orchestrator | TASK [create openstack pool(s)] ************************************************
2026-02-14 03:39:34.601717 | orchestrator | Saturday 14 February 2026 03:37:56 +0000 (0:00:00.385) 0:00:29.854 *****
2026-02-14 03:39:34.601728 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-14 03:39:34.601739 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-14 03:39:34.601762 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-14 03:39:34.601771 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-14 03:39:34.601781 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-14 03:39:34.601790 | orchestrator |
2026-02-14 03:39:34.601799 | orchestrator | TASK [generate keys] ***********************************************************
2026-02-14 03:39:34.601808 | orchestrator | Saturday 14 February 2026 03:38:41 +0000 (0:00:45.459) 0:01:15.313 *****
2026-02-14 03:39:34.601817 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-14 03:39:34.601825 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-14 03:39:34.601834 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-14 03:39:34.601843 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-14 03:39:34.601851 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-14 03:39:34.601860 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-14 03:39:34.601869 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2026-02-14 03:39:34.601878 | orchestrator |
2026-02-14 03:39:34.601886 | orchestrator | TASK [get keys from monitors] **************************************************
2026-02-14 03:39:34.601895 | orchestrator | Saturday 14 February 2026 03:39:05 +0000 (0:00:23.237) 0:01:38.550 *****
2026-02-14 03:39:34.601918 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-14 03:39:34.601934 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-14 03:39:34.601945 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-14 03:39:34.601955 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-14 03:39:34.601965 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-14 03:39:34.601975 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-14 03:39:34.601985 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-14 03:39:34.601995 | orchestrator |
2026-02-14 03:39:34.602010 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2026-02-14 03:39:34.602113 | orchestrator | Saturday 14 February 2026 03:39:16 +0000 (0:00:11.887) 0:01:50.438 *****
2026-02-14 03:39:34.602128 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-14 03:39:34.602138 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-14 03:39:34.602146 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-14 03:39:34.602155 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-14 03:39:34.602164 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-14 03:39:34.602173 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-14 03:39:34.602210 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-14 03:39:34.602220 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-14 03:39:34.602228 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-14 03:39:34.602237 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-14 03:39:34.602245 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-14 03:39:34.602254 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-14 03:39:34.602262 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-14 03:39:34.602271 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-14 03:39:34.602280 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-14 03:39:34.602288 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-14 03:39:34.602297 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-14 03:39:34.602305 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-14 03:39:34.602314 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2026-02-14 03:39:34.602323 | orchestrator |
2026-02-14 03:39:34.602332 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 03:39:34.602347 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-02-14 03:39:34.602358 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-02-14 03:39:34.602367 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-02-14 03:39:34.602376 | orchestrator |
2026-02-14 03:39:34.602385 | orchestrator |
2026-02-14 03:39:34.602394 | orchestrator |
2026-02-14 03:39:34.602402 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 03:39:34.602411 | orchestrator | Saturday 14 February 2026 03:39:34 +0000 (0:00:17.573) 0:02:08.011 *****
2026-02-14 03:39:34.602420 | orchestrator | ===============================================================================
2026-02-14 03:39:34.602435 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.46s
2026-02-14 03:39:34.602446 | orchestrator | generate keys ---------------------------------------------------------- 23.24s
2026-02-14 03:39:34.602457 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.57s
2026-02-14 03:39:34.602467 | orchestrator | get keys from monitors ------------------------------------------------- 11.89s
2026-02-14 03:39:34.602478 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.15s
2026-02-14 03:39:34.602489 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.70s
2026-02-14 03:39:34.602500 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.65s
2026-02-14 03:39:34.602511 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.20s
2026-02-14 03:39:34.602521 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 1.06s
2026-02-14 03:39:34.602532 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.85s
2026-02-14 03:39:34.602543 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 0.83s
2026-02-14 03:39:34.602554 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.82s
2026-02-14 03:39:34.602565 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.80s
2026-02-14 03:39:34.602585 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.76s
2026-02-14 03:39:34.967581 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.75s
2026-02-14 03:39:34.967664 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.68s
2026-02-14 03:39:34.967674 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.66s
2026-02-14 03:39:34.967682 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.65s
2026-02-14 03:39:34.967689 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.64s
2026-02-14 03:39:34.967697 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.62s
2026-02-14 03:39:37.313546 | orchestrator | 2026-02-14 03:39:37 | INFO  | Task 2f62c5f3-22ce-4550-94a9-33347430f2c3 (copy-ceph-keys) was prepared for execution.
2026-02-14 03:39:37.314568 | orchestrator | 2026-02-14 03:39:37 | INFO  | It takes a moment until task 2f62c5f3-22ce-4550-94a9-33347430f2c3 (copy-ceph-keys) has been started and output is visible here.
2026-02-14 03:40:16.299907 | orchestrator |
2026-02-14 03:40:16.300023 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-02-14 03:40:16.300075 | orchestrator |
2026-02-14 03:40:16.300089 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-02-14 03:40:16.300100 | orchestrator | Saturday 14 February 2026 03:39:41 +0000 (0:00:00.160) 0:00:00.160 *****
2026-02-14 03:40:16.300111 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-02-14 03:40:16.300124 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-14 03:40:16.300136 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-14 03:40:16.300147 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-02-14 03:40:16.300158 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-14 03:40:16.300169 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-02-14 03:40:16.300179 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-02-14 03:40:16.300190 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] =>
(item=ceph.client.gnocchi.keyring)
2026-02-14 03:40:16.300227 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-02-14 03:40:16.300238 | orchestrator |
2026-02-14 03:40:16.300249 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-02-14 03:40:16.300260 | orchestrator | Saturday 14 February 2026 03:39:46 +0000 (0:00:04.773) 0:00:04.934 *****
2026-02-14 03:40:16.300271 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-02-14 03:40:16.300296 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-14 03:40:16.300307 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-14 03:40:16.300318 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-02-14 03:40:16.300329 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-14 03:40:16.300339 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-02-14 03:40:16.300350 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-02-14 03:40:16.300360 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-02-14 03:40:16.300371 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-02-14 03:40:16.300382 | orchestrator |
2026-02-14 03:40:16.300392 | orchestrator | TASK [Create share directory] **************************************************
2026-02-14 03:40:16.300403 | orchestrator | Saturday 14 February 2026 03:39:50 +0000 (0:00:04.377) 0:00:09.312 *****
2026-02-14 03:40:16.300415 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-14 03:40:16.300426 | orchestrator |
2026-02-14 03:40:16.300437 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-02-14 03:40:16.300449 | orchestrator | Saturday 14 February 2026 03:39:51 +0000 (0:00:00.958) 0:00:10.271 *****
2026-02-14 03:40:16.300462 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-02-14 03:40:16.300474 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-02-14 03:40:16.300487 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-02-14 03:40:16.300500 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-02-14 03:40:16.300513 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-02-14 03:40:16.300526 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-02-14 03:40:16.300538 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-02-14 03:40:16.300550 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-02-14 03:40:16.300562 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-02-14 03:40:16.300572 | orchestrator |
2026-02-14 03:40:16.300583 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-02-14 03:40:16.300594 | orchestrator | Saturday 14 February 2026 03:40:04 +0000 (0:00:13.127) 0:00:23.398 *****
2026-02-14 03:40:16.300604 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-02-14 03:40:16.300615 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-02-14 03:40:16.300626 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-02-14 03:40:16.300637 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-02-14 03:40:16.300665 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-02-14 03:40:16.300685 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-02-14 03:40:16.300696 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-02-14 03:40:16.300707 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-02-14 03:40:16.300718 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-02-14 03:40:16.300729 | orchestrator |
2026-02-14 03:40:16.300740 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-02-14 03:40:16.300750 | orchestrator | Saturday 14 February 2026 03:40:08 +0000 (0:00:04.049) 0:00:27.448 *****
2026-02-14 03:40:16.300762 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-02-14 03:40:16.300773 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-02-14 03:40:16.300784 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-02-14 03:40:16.300794 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-02-14 03:40:16.300805 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-02-14 03:40:16.300816 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-02-14 03:40:16.300827 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-02-14 03:40:16.300837 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-02-14 03:40:16.300848 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-02-14 03:40:16.300859 | orchestrator |
2026-02-14 03:40:16.300870 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 03:40:16.300886 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-14 03:40:16.300899 | orchestrator |
2026-02-14 03:40:16.300910 | orchestrator |
2026-02-14 03:40:16.300921 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 03:40:16.300932 | orchestrator | Saturday 14 February 2026 03:40:15 +0000 (0:00:07.089) 0:00:34.537 *****
2026-02-14 03:40:16.300942 | orchestrator | ===============================================================================
2026-02-14 03:40:16.300953 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.13s
2026-02-14 03:40:16.300964 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.09s
2026-02-14 03:40:16.300975 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.77s
2026-02-14 03:40:16.300985 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.38s
2026-02-14 03:40:16.300996 | orchestrator | Check if target directories exist --------------------------------------- 4.05s
2026-02-14 03:40:16.301007 | orchestrator | Create share directory -------------------------------------------------- 0.96s
2026-02-14 03:40:28.741619 | orchestrator | 2026-02-14 03:40:28 | INFO  | Task 20f56fde-a1a7-40df-a5b8-ab7ebd5cf3d1 (cephclient) was prepared for execution.
2026-02-14 03:40:28.741737 | orchestrator | 2026-02-14 03:40:28 | INFO  | It takes a moment until task 20f56fde-a1a7-40df-a5b8-ab7ebd5cf3d1 (cephclient) has been started and output is visible here.
2026-02-14 03:41:28.608361 | orchestrator |
2026-02-14 03:41:28.608481 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-02-14 03:41:28.608499 | orchestrator |
2026-02-14 03:41:28.608512 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-02-14 03:41:28.608523 | orchestrator | Saturday 14 February 2026 03:40:32 +0000 (0:00:00.232) 0:00:00.232 *****
2026-02-14 03:41:28.608535 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-02-14 03:41:28.608573 | orchestrator |
2026-02-14 03:41:28.608585 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-02-14 03:41:28.608596 | orchestrator | Saturday 14 February 2026 03:40:33 +0000 (0:00:00.236) 0:00:00.468 *****
2026-02-14 03:41:28.608608 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-02-14 03:41:28.608619 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-02-14 03:41:28.608630 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-02-14 03:41:28.608642 | orchestrator |
2026-02-14 03:41:28.608653 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-02-14 03:41:28.608664 | orchestrator | Saturday 14 February 2026 03:40:34 +0000 (0:00:01.214) 0:00:01.683 *****
2026-02-14 03:41:28.608676 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-02-14 03:41:28.608687 | orchestrator |
2026-02-14 03:41:28.608698 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-02-14 03:41:28.608709 | orchestrator | Saturday 14 February 2026 03:40:35 +0000 (0:00:01.431) 0:00:03.114 *****
2026-02-14 03:41:28.608720 | orchestrator | changed: [testbed-manager]
2026-02-14 03:41:28.608731 | orchestrator |
2026-02-14 03:41:28.608742 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-02-14 03:41:28.608753 | orchestrator | Saturday 14 February 2026 03:40:36 +0000 (0:00:00.911) 0:00:04.026 *****
2026-02-14 03:41:28.608764 | orchestrator | changed: [testbed-manager]
2026-02-14 03:41:28.608775 | orchestrator |
2026-02-14 03:41:28.608786 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-02-14 03:41:28.608797 | orchestrator | Saturday 14 February 2026 03:40:37 +0000 (0:00:00.907) 0:00:04.933 *****
2026-02-14 03:41:28.608808 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-02-14 03:41:28.608820 | orchestrator | ok: [testbed-manager]
2026-02-14 03:41:28.608831 | orchestrator |
2026-02-14 03:41:28.608841 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-02-14 03:41:28.608852 | orchestrator | Saturday 14 February 2026 03:41:18 +0000 (0:00:41.064) 0:00:45.997 *****
2026-02-14 03:41:28.608864 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-02-14 03:41:28.608875 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-02-14 03:41:28.608886 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-02-14 03:41:28.608897 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-02-14 03:41:28.608908 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-02-14 03:41:28.608919 | orchestrator |
2026-02-14 03:41:28.608930 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-02-14 03:41:28.608941 | orchestrator | Saturday 14 February 2026 03:41:22 +0000 (0:00:04.119) 0:00:50.117 *****
2026-02-14 03:41:28.608952 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-02-14 03:41:28.608963 | orchestrator |
2026-02-14 03:41:28.608974 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-02-14 03:41:28.608985 | orchestrator | Saturday 14 February 2026 03:41:23 +0000 (0:00:00.454) 0:00:50.571 *****
2026-02-14 03:41:28.608996 | orchestrator | skipping: [testbed-manager]
2026-02-14 03:41:28.609007 | orchestrator |
2026-02-14 03:41:28.609018 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-02-14 03:41:28.609049 | orchestrator | Saturday 14 February 2026 03:41:23 +0000 (0:00:00.144) 0:00:50.716 *****
2026-02-14 03:41:28.609060 | orchestrator | skipping: [testbed-manager]
2026-02-14 03:41:28.609071 | orchestrator |
2026-02-14 03:41:28.609082 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-02-14 03:41:28.609093 | orchestrator | Saturday 14 February 2026 03:41:23 +0000 (0:00:00.516) 0:00:51.233 *****
2026-02-14 03:41:28.609119 | orchestrator | changed: [testbed-manager]
2026-02-14 03:41:28.609131 | orchestrator |
2026-02-14 03:41:28.609142 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-02-14 03:41:28.609165 | orchestrator | Saturday 14 February 2026 03:41:25 +0000 (0:00:01.497) 0:00:52.730 *****
2026-02-14 03:41:28.609176 | orchestrator | changed: [testbed-manager]
2026-02-14 03:41:28.609189 | orchestrator |
2026-02-14 03:41:28.609207 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-02-14 03:41:28.609226 | orchestrator | Saturday 14 February 2026 03:41:26 +0000 (0:00:00.693) 0:00:53.424 *****
2026-02-14 03:41:28.609242 | orchestrator | changed: [testbed-manager]
2026-02-14 03:41:28.609259 |
orchestrator | 2026-02-14 03:41:28.609275 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-02-14 03:41:28.609293 | orchestrator | Saturday 14 February 2026 03:41:26 +0000 (0:00:00.639) 0:00:54.064 ***** 2026-02-14 03:41:28.609310 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-02-14 03:41:28.609327 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-02-14 03:41:28.609346 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-02-14 03:41:28.609364 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-02-14 03:41:28.609382 | orchestrator | 2026-02-14 03:41:28.609400 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 03:41:28.609419 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-14 03:41:28.609439 | orchestrator | 2026-02-14 03:41:28.609454 | orchestrator | 2026-02-14 03:41:28.609485 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 03:41:28.609497 | orchestrator | Saturday 14 February 2026 03:41:28 +0000 (0:00:01.466) 0:00:55.530 ***** 2026-02-14 03:41:28.609508 | orchestrator | =============================================================================== 2026-02-14 03:41:28.609519 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 41.06s 2026-02-14 03:41:28.609530 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.12s 2026-02-14 03:41:28.609541 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.50s 2026-02-14 03:41:28.609552 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.47s 2026-02-14 03:41:28.609563 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.43s 2026-02-14 03:41:28.609574 | 
orchestrator | osism.services.cephclient : Create required directories ----------------- 1.21s 2026-02-14 03:41:28.609585 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.91s 2026-02-14 03:41:28.609595 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.91s 2026-02-14 03:41:28.609606 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.69s 2026-02-14 03:41:28.609617 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.64s 2026-02-14 03:41:28.609628 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.52s 2026-02-14 03:41:28.609639 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.45s 2026-02-14 03:41:28.609650 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.24s 2026-02-14 03:41:28.609661 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s 2026-02-14 03:41:30.962606 | orchestrator | 2026-02-14 03:41:30 | INFO  | Task 579ce5c0-cad9-45c6-9da0-1a04fb01cca3 (ceph-bootstrap-dashboard) was prepared for execution. 2026-02-14 03:41:30.962707 | orchestrator | 2026-02-14 03:41:30 | INFO  | It takes a moment until task 579ce5c0-cad9-45c6-9da0-1a04fb01cca3 (ceph-bootstrap-dashboard) has been started and output is visible here. 
2026-02-14 03:42:53.147539 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-14 03:42:53.147656 | orchestrator | 2.16.14 2026-02-14 03:42:53.147674 | orchestrator | 2026-02-14 03:42:53.147687 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-02-14 03:42:53.147699 | orchestrator | 2026-02-14 03:42:53.147710 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-02-14 03:42:53.147746 | orchestrator | Saturday 14 February 2026 03:41:35 +0000 (0:00:00.279) 0:00:00.279 ***** 2026-02-14 03:42:53.147758 | orchestrator | changed: [testbed-manager] 2026-02-14 03:42:53.147770 | orchestrator | 2026-02-14 03:42:53.147781 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-02-14 03:42:53.147792 | orchestrator | Saturday 14 February 2026 03:41:37 +0000 (0:00:01.948) 0:00:02.227 ***** 2026-02-14 03:42:53.147803 | orchestrator | changed: [testbed-manager] 2026-02-14 03:42:53.147814 | orchestrator | 2026-02-14 03:42:53.147825 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-02-14 03:42:53.147835 | orchestrator | Saturday 14 February 2026 03:41:38 +0000 (0:00:01.047) 0:00:03.275 ***** 2026-02-14 03:42:53.147846 | orchestrator | changed: [testbed-manager] 2026-02-14 03:42:53.147857 | orchestrator | 2026-02-14 03:42:53.147868 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-02-14 03:42:53.147879 | orchestrator | Saturday 14 February 2026 03:41:39 +0000 (0:00:01.045) 0:00:04.321 ***** 2026-02-14 03:42:53.147889 | orchestrator | changed: [testbed-manager] 2026-02-14 03:42:53.147900 | orchestrator | 2026-02-14 03:42:53.147911 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-02-14 03:42:53.147922 | orchestrator | Saturday 14 February 
2026 03:41:40 +0000 (0:00:01.156) 0:00:05.478 ***** 2026-02-14 03:42:53.147933 | orchestrator | changed: [testbed-manager] 2026-02-14 03:42:53.147943 | orchestrator | 2026-02-14 03:42:53.147954 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-02-14 03:42:53.147965 | orchestrator | Saturday 14 February 2026 03:41:41 +0000 (0:00:01.065) 0:00:06.543 ***** 2026-02-14 03:42:53.147990 | orchestrator | changed: [testbed-manager] 2026-02-14 03:42:53.148001 | orchestrator | 2026-02-14 03:42:53.148013 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-02-14 03:42:53.148130 | orchestrator | Saturday 14 February 2026 03:41:42 +0000 (0:00:01.052) 0:00:07.596 ***** 2026-02-14 03:42:53.148145 | orchestrator | changed: [testbed-manager] 2026-02-14 03:42:53.148158 | orchestrator | 2026-02-14 03:42:53.148170 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-02-14 03:42:53.148182 | orchestrator | Saturday 14 February 2026 03:41:44 +0000 (0:00:02.112) 0:00:09.708 ***** 2026-02-14 03:42:53.148194 | orchestrator | changed: [testbed-manager] 2026-02-14 03:42:53.148206 | orchestrator | 2026-02-14 03:42:53.148219 | orchestrator | TASK [Create admin user] ******************************************************* 2026-02-14 03:42:53.148231 | orchestrator | Saturday 14 February 2026 03:41:45 +0000 (0:00:01.231) 0:00:10.939 ***** 2026-02-14 03:42:53.148243 | orchestrator | changed: [testbed-manager] 2026-02-14 03:42:53.148255 | orchestrator | 2026-02-14 03:42:53.148267 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-02-14 03:42:53.148279 | orchestrator | Saturday 14 February 2026 03:42:28 +0000 (0:00:42.343) 0:00:53.283 ***** 2026-02-14 03:42:53.148291 | orchestrator | skipping: [testbed-manager] 2026-02-14 03:42:53.148303 | orchestrator | 2026-02-14 03:42:53.148315 | orchestrator | 
PLAY [Restart ceph manager services] ******************************************* 2026-02-14 03:42:53.148327 | orchestrator | 2026-02-14 03:42:53.148339 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-14 03:42:53.148351 | orchestrator | Saturday 14 February 2026 03:42:28 +0000 (0:00:00.173) 0:00:53.456 ***** 2026-02-14 03:42:53.148363 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:42:53.148376 | orchestrator | 2026-02-14 03:42:53.148388 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-02-14 03:42:53.148401 | orchestrator | 2026-02-14 03:42:53.148412 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-14 03:42:53.148423 | orchestrator | Saturday 14 February 2026 03:42:40 +0000 (0:00:11.766) 0:01:05.222 ***** 2026-02-14 03:42:53.148434 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:42:53.148444 | orchestrator | 2026-02-14 03:42:53.148455 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-02-14 03:42:53.148475 | orchestrator | 2026-02-14 03:42:53.148487 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-14 03:42:53.148498 | orchestrator | Saturday 14 February 2026 03:42:41 +0000 (0:00:01.186) 0:01:06.409 ***** 2026-02-14 03:42:53.148509 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:42:53.148520 | orchestrator | 2026-02-14 03:42:53.148531 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 03:42:53.148543 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-14 03:42:53.148556 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-14 03:42:53.148567 | orchestrator | testbed-node-1 : ok=1  changed=1  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-14 03:42:53.148579 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-14 03:42:53.148589 | orchestrator | 2026-02-14 03:42:53.148600 | orchestrator | 2026-02-14 03:42:53.148611 | orchestrator | 2026-02-14 03:42:53.148622 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 03:42:53.148633 | orchestrator | Saturday 14 February 2026 03:42:52 +0000 (0:00:11.388) 0:01:17.797 ***** 2026-02-14 03:42:53.148643 | orchestrator | =============================================================================== 2026-02-14 03:42:53.148655 | orchestrator | Create admin user ------------------------------------------------------ 42.34s 2026-02-14 03:42:53.148683 | orchestrator | Restart ceph manager service ------------------------------------------- 24.34s 2026-02-14 03:42:53.148695 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.11s 2026-02-14 03:42:53.148706 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.95s 2026-02-14 03:42:53.148717 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.23s 2026-02-14 03:42:53.148728 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.16s 2026-02-14 03:42:53.148739 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.07s 2026-02-14 03:42:53.148750 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.05s 2026-02-14 03:42:53.148761 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.05s 2026-02-14 03:42:53.148771 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.05s 2026-02-14 03:42:53.148782 | orchestrator | Remove temporary file for 
ceph_dashboard_password ----------------------- 0.17s 2026-02-14 03:42:53.459618 | orchestrator | + sh -c /opt/configuration/scripts/deploy/300-openstack.sh 2026-02-14 03:42:55.461833 | orchestrator | 2026-02-14 03:42:55 | INFO  | Task e5eeac50-c12d-43ea-8f53-8facdd7629ae (keystone) was prepared for execution. 2026-02-14 03:42:55.461932 | orchestrator | 2026-02-14 03:42:55 | INFO  | It takes a moment until task e5eeac50-c12d-43ea-8f53-8facdd7629ae (keystone) has been started and output is visible here. 2026-02-14 03:43:02.246436 | orchestrator | 2026-02-14 03:43:02.246546 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-14 03:43:02.246564 | orchestrator | 2026-02-14 03:43:02.246576 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-14 03:43:02.246603 | orchestrator | Saturday 14 February 2026 03:42:59 +0000 (0:00:00.256) 0:00:00.256 ***** 2026-02-14 03:43:02.246615 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:43:02.246627 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:43:02.246638 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:43:02.246649 | orchestrator | 2026-02-14 03:43:02.246660 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-14 03:43:02.246672 | orchestrator | Saturday 14 February 2026 03:42:59 +0000 (0:00:00.301) 0:00:00.558 ***** 2026-02-14 03:43:02.246704 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-02-14 03:43:02.246715 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-02-14 03:43:02.246726 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-02-14 03:43:02.246737 | orchestrator | 2026-02-14 03:43:02.246747 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-02-14 03:43:02.246758 | orchestrator | 2026-02-14 03:43:02.246769 | orchestrator | TASK 
[keystone : include_tasks] ************************************************ 2026-02-14 03:43:02.246780 | orchestrator | Saturday 14 February 2026 03:43:00 +0000 (0:00:00.432) 0:00:00.990 ***** 2026-02-14 03:43:02.246797 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:43:02.246817 | orchestrator | 2026-02-14 03:43:02.246835 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-02-14 03:43:02.246853 | orchestrator | Saturday 14 February 2026 03:43:00 +0000 (0:00:00.564) 0:00:01.555 ***** 2026-02-14 03:43:02.246879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-14 03:43:02.246904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-14 03:43:02.246962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-14 03:43:02.247002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-14 03:43:02.247057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-14 03:43:02.247079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-14 03:43:02.247099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-14 03:43:02.247119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-14 03:43:02.247139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-14 03:43:02.247169 | orchestrator | 2026-02-14 03:43:02.247191 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 
2026-02-14 03:43:02.247224 | orchestrator | Saturday 14 February 2026 03:43:02 +0000 (0:00:01.431) 0:00:02.987 ***** 2026-02-14 03:43:07.888613 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:43:07.888722 | orchestrator | 2026-02-14 03:43:07.888748 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-02-14 03:43:07.888790 | orchestrator | Saturday 14 February 2026 03:43:02 +0000 (0:00:00.276) 0:00:03.263 ***** 2026-02-14 03:43:07.888811 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:43:07.888833 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:43:07.888846 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:43:07.888857 | orchestrator | 2026-02-14 03:43:07.888869 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-02-14 03:43:07.888880 | orchestrator | Saturday 14 February 2026 03:43:02 +0000 (0:00:00.347) 0:00:03.611 ***** 2026-02-14 03:43:07.888892 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-14 03:43:07.888903 | orchestrator | 2026-02-14 03:43:07.888914 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-14 03:43:07.888925 | orchestrator | Saturday 14 February 2026 03:43:03 +0000 (0:00:00.833) 0:00:04.444 ***** 2026-02-14 03:43:07.888936 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:43:07.888947 | orchestrator | 2026-02-14 03:43:07.888958 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-02-14 03:43:07.888969 | orchestrator | Saturday 14 February 2026 03:43:04 +0000 (0:00:00.575) 0:00:05.019 ***** 2026-02-14 03:43:07.888986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-14 03:43:07.889002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-14 03:43:07.889016 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-14 03:43:07.889129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-14 03:43:07.889156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-14 03:43:07.889178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-14 03:43:07.889199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-14 03:43:07.889222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-14 03:43:07.889255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-14 03:43:07.889271 | orchestrator | 2026-02-14 03:43:07.889284 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-02-14 03:43:07.889297 | orchestrator | Saturday 14 February 2026 03:43:07 +0000 (0:00:03.002) 0:00:08.022 ***** 2026-02-14 03:43:07.889323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-14 03:43:08.666607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-14 03:43:08.666758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-14 03:43:08.666800 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:43:08.666829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-14 03:43:08.666866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-14 03:43:08.666884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-14 03:43:08.666896 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:43:08.666927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-14 03:43:08.666940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-02-14 03:43:08.666952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-14 03:43:08.666971 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:43:08.666983 | orchestrator | 2026-02-14 03:43:08.666995 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-02-14 03:43:08.667008 | orchestrator | Saturday 14 February 2026 03:43:07 +0000 (0:00:00.613) 0:00:08.635 ***** 2026-02-14 03:43:08.667020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-14 03:43:08.667108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-14 03:43:08.667131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-14 03:43:11.863688 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:43:11.863801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-14 03:43:11.863821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-14 03:43:11.863860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-14 03:43:11.863873 | 
orchestrator | skipping: [testbed-node-1] 2026-02-14 03:43:11.863900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-14 03:43:11.863914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-14 03:43:11.863943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-14 03:43:11.863956 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:43:11.863967 | orchestrator | 2026-02-14 03:43:11.863979 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-02-14 03:43:11.863991 | orchestrator | Saturday 14 February 2026 03:43:08 +0000 (0:00:00.777) 0:00:09.413 ***** 2026-02-14 03:43:11.864003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-14 03:43:11.864202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-14 03:43:11.864231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-14 03:43:11.864258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-14 03:43:16.433997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-14 03:43:16.434230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 
2026-02-14 03:43:16.434249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-14 03:43:16.434261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-14 03:43:16.434288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-14 
03:43:16.434300 | orchestrator | 2026-02-14 03:43:16.434314 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-02-14 03:43:16.434327 | orchestrator | Saturday 14 February 2026 03:43:11 +0000 (0:00:03.196) 0:00:12.609 ***** 2026-02-14 03:43:16.434358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-14 03:43:16.434372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-02-14 03:43:16.434396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-14 03:43:16.434438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-14 03:43:16.434457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-14 03:43:16.434478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-14 03:43:19.916694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-14 03:43:19.916834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-14 03:43:19.916851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-14 03:43:19.916863 | orchestrator | 2026-02-14 03:43:19.916878 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-02-14 03:43:19.916891 | orchestrator | Saturday 14 February 2026 03:43:16 +0000 (0:00:04.570) 0:00:17.179 ***** 2026-02-14 03:43:19.916902 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:43:19.916914 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:43:19.916925 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:43:19.916936 | orchestrator | 
2026-02-14 03:43:19.916947 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-02-14 03:43:19.916958 | orchestrator | Saturday 14 February 2026 03:43:17 +0000 (0:00:01.330) 0:00:18.510 ***** 2026-02-14 03:43:19.916968 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:43:19.916979 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:43:19.916990 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:43:19.917001 | orchestrator | 2026-02-14 03:43:19.917011 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-02-14 03:43:19.917069 | orchestrator | Saturday 14 February 2026 03:43:18 +0000 (0:00:00.763) 0:00:19.274 ***** 2026-02-14 03:43:19.917081 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:43:19.917092 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:43:19.917103 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:43:19.917114 | orchestrator | 2026-02-14 03:43:19.917140 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-02-14 03:43:19.917151 | orchestrator | Saturday 14 February 2026 03:43:19 +0000 (0:00:00.526) 0:00:19.801 ***** 2026-02-14 03:43:19.917162 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:43:19.917173 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:43:19.917184 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:43:19.917195 | orchestrator | 2026-02-14 03:43:19.917207 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-02-14 03:43:19.917220 | orchestrator | Saturday 14 February 2026 03:43:19 +0000 (0:00:00.312) 0:00:20.113 ***** 2026-02-14 03:43:19.917256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-14 03:43:19.917280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-14 03:43:19.917295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-14 03:43:19.917308 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:43:19.917323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-14 03:43:19.917343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-14 03:43:19.917356 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-14 03:43:19.917379 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:43:19.917401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-14 03:43:38.164289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-14 03:43:38.164367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-14 03:43:38.164374 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:43:38.164380 | orchestrator | 2026-02-14 03:43:38.164385 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-14 03:43:38.164390 | orchestrator | Saturday 14 February 2026 03:43:19 +0000 (0:00:00.544) 0:00:20.658 ***** 2026-02-14 03:43:38.164394 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:43:38.164398 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:43:38.164402 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:43:38.164405 | orchestrator | 2026-02-14 03:43:38.164409 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-02-14 03:43:38.164413 | orchestrator | Saturday 14 February 2026 03:43:20 +0000 (0:00:00.286) 0:00:20.944 ***** 2026-02-14 03:43:38.164417 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-14 03:43:38.164422 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-14 03:43:38.164440 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-14 03:43:38.164444 | orchestrator | 2026-02-14 03:43:38.164457 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-02-14 03:43:38.164461 | orchestrator | Saturday 14 February 2026 03:43:21 +0000 (0:00:01.803) 0:00:22.747 ***** 2026-02-14 03:43:38.164465 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-14 03:43:38.164469 | orchestrator | 2026-02-14 03:43:38.164473 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-02-14 03:43:38.164476 | orchestrator | Saturday 14 February 2026 03:43:22 +0000 (0:00:00.907) 0:00:23.655 ***** 2026-02-14 03:43:38.164480 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:43:38.164484 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:43:38.164488 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:43:38.164491 | orchestrator | 2026-02-14 03:43:38.164495 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-02-14 03:43:38.164499 | orchestrator | Saturday 14 February 2026 03:43:23 +0000 (0:00:00.573) 0:00:24.228 ***** 2026-02-14 03:43:38.164503 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-14 03:43:38.164506 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-14 03:43:38.164510 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-14 03:43:38.164514 | orchestrator | 2026-02-14 03:43:38.164518 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-02-14 03:43:38.164522 | orchestrator | Saturday 14 February 2026 03:43:24 +0000 (0:00:01.060) 
0:00:25.289 ***** 2026-02-14 03:43:38.164526 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:43:38.164530 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:43:38.164534 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:43:38.164538 | orchestrator | 2026-02-14 03:43:38.164542 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-02-14 03:43:38.164545 | orchestrator | Saturday 14 February 2026 03:43:25 +0000 (0:00:00.497) 0:00:25.786 ***** 2026-02-14 03:43:38.164549 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-14 03:43:38.164553 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-14 03:43:38.164557 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-14 03:43:38.164561 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-14 03:43:38.164565 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-14 03:43:38.164568 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-14 03:43:38.164572 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-14 03:43:38.164576 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-14 03:43:38.164588 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-14 03:43:38.164592 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-14 03:43:38.164596 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-14 
03:43:38.164599 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-14 03:43:38.164603 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-14 03:43:38.164607 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-14 03:43:38.164611 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-14 03:43:38.164614 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-14 03:43:38.164622 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-14 03:43:38.164625 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-14 03:43:38.164629 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-14 03:43:38.164633 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-14 03:43:38.164637 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-14 03:43:38.164640 | orchestrator | 2026-02-14 03:43:38.164644 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-02-14 03:43:38.164648 | orchestrator | Saturday 14 February 2026 03:43:33 +0000 (0:00:08.272) 0:00:34.059 ***** 2026-02-14 03:43:38.164652 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-14 03:43:38.164655 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-14 03:43:38.164659 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-14 03:43:38.164663 
| orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-14 03:43:38.164666 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-14 03:43:38.164670 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-14 03:43:38.164674 | orchestrator | 2026-02-14 03:43:38.164678 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-02-14 03:43:38.164684 | orchestrator | Saturday 14 February 2026 03:43:35 +0000 (0:00:02.593) 0:00:36.653 ***** 2026-02-14 03:43:38.164689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-14 03:43:38.164698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-14 03:45:20.549183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-14 03:45:20.549360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-14 03:45:20.549398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-14 03:45:20.549411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-14 03:45:20.549422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-14 03:45:20.549453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-14 03:45:20.549473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-14 03:45:20.549486 | orchestrator | 2026-02-14 03:45:20.549499 | orchestrator | TASK [keystone : include_tasks] ************************************************ 
2026-02-14 03:45:20.549512 | orchestrator | Saturday 14 February 2026 03:43:38 +0000 (0:00:02.249) 0:00:38.902 ***** 2026-02-14 03:45:20.549523 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:45:20.549535 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:45:20.549546 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:45:20.549557 | orchestrator | 2026-02-14 03:45:20.549567 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-02-14 03:45:20.549578 | orchestrator | Saturday 14 February 2026 03:43:38 +0000 (0:00:00.503) 0:00:39.406 ***** 2026-02-14 03:45:20.549589 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:45:20.549602 | orchestrator | 2026-02-14 03:45:20.549613 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-02-14 03:45:20.549626 | orchestrator | Saturday 14 February 2026 03:43:41 +0000 (0:00:02.446) 0:00:41.852 ***** 2026-02-14 03:45:20.549638 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:45:20.549650 | orchestrator | 2026-02-14 03:45:20.549663 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-02-14 03:45:20.549675 | orchestrator | Saturday 14 February 2026 03:43:43 +0000 (0:00:02.186) 0:00:44.039 ***** 2026-02-14 03:45:20.549687 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:45:20.549700 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:45:20.549712 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:45:20.549724 | orchestrator | 2026-02-14 03:45:20.549736 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-02-14 03:45:20.549749 | orchestrator | Saturday 14 February 2026 03:43:44 +0000 (0:00:00.854) 0:00:44.894 ***** 2026-02-14 03:45:20.549761 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:45:20.549773 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:45:20.549785 | orchestrator | ok: 
[testbed-node-2] 2026-02-14 03:45:20.549797 | orchestrator | 2026-02-14 03:45:20.549809 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-02-14 03:45:20.549828 | orchestrator | Saturday 14 February 2026 03:43:44 +0000 (0:00:00.315) 0:00:45.209 ***** 2026-02-14 03:45:20.549841 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:45:20.549854 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:45:20.549867 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:45:20.549877 | orchestrator | 2026-02-14 03:45:20.549888 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-02-14 03:45:20.549899 | orchestrator | Saturday 14 February 2026 03:43:44 +0000 (0:00:00.530) 0:00:45.740 ***** 2026-02-14 03:45:20.549909 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:45:20.549920 | orchestrator | 2026-02-14 03:45:20.549931 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-02-14 03:45:20.549941 | orchestrator | Saturday 14 February 2026 03:43:59 +0000 (0:00:14.692) 0:01:00.432 ***** 2026-02-14 03:45:20.549952 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:45:20.549962 | orchestrator | 2026-02-14 03:45:20.549973 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-14 03:45:20.549984 | orchestrator | Saturday 14 February 2026 03:44:10 +0000 (0:00:10.493) 0:01:10.926 ***** 2026-02-14 03:45:20.550001 | orchestrator | 2026-02-14 03:45:20.550012 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-14 03:45:20.550138 | orchestrator | Saturday 14 February 2026 03:44:10 +0000 (0:00:00.069) 0:01:10.995 ***** 2026-02-14 03:45:20.550149 | orchestrator | 2026-02-14 03:45:20.550160 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-14 
03:45:20.550171 | orchestrator | Saturday 14 February 2026 03:44:10 +0000 (0:00:00.076) 0:01:11.072 ***** 2026-02-14 03:45:20.550181 | orchestrator | 2026-02-14 03:45:20.550192 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-02-14 03:45:20.550203 | orchestrator | Saturday 14 February 2026 03:44:10 +0000 (0:00:00.072) 0:01:11.144 ***** 2026-02-14 03:45:20.550213 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:45:20.550224 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:45:20.550234 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:45:20.550245 | orchestrator | 2026-02-14 03:45:20.550256 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-02-14 03:45:20.550266 | orchestrator | Saturday 14 February 2026 03:44:58 +0000 (0:00:48.138) 0:01:59.283 ***** 2026-02-14 03:45:20.550277 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:45:20.550287 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:45:20.550298 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:45:20.550309 | orchestrator | 2026-02-14 03:45:20.550319 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-02-14 03:45:20.550330 | orchestrator | Saturday 14 February 2026 03:45:08 +0000 (0:00:10.092) 0:02:09.375 ***** 2026-02-14 03:45:20.550340 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:45:20.550351 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:45:20.550362 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:45:20.550372 | orchestrator | 2026-02-14 03:45:20.550383 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-14 03:45:20.550394 | orchestrator | Saturday 14 February 2026 03:45:19 +0000 (0:00:11.325) 0:02:20.701 ***** 2026-02-14 03:45:20.550414 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:46:10.793023 | orchestrator | 2026-02-14 03:46:10.793242 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-02-14 03:46:10.793274 | orchestrator | Saturday 14 February 2026 03:45:20 +0000 (0:00:00.595) 0:02:21.297 ***** 2026-02-14 03:46:10.793295 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:46:10.793316 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:46:10.793334 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:46:10.793352 | orchestrator | 2026-02-14 03:46:10.793370 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-02-14 03:46:10.793389 | orchestrator | Saturday 14 February 2026 03:45:21 +0000 (0:00:01.133) 0:02:22.431 ***** 2026-02-14 03:46:10.793520 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:46:10.793541 | orchestrator | 2026-02-14 03:46:10.793561 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-02-14 03:46:10.793581 | orchestrator | Saturday 14 February 2026 03:45:23 +0000 (0:00:01.818) 0:02:24.250 ***** 2026-02-14 03:46:10.793600 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-02-14 03:46:10.793613 | orchestrator | 2026-02-14 03:46:10.793626 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-02-14 03:46:10.793640 | orchestrator | Saturday 14 February 2026 03:45:35 +0000 (0:00:11.738) 0:02:35.988 ***** 2026-02-14 03:46:10.793652 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-02-14 03:46:10.793665 | orchestrator | 2026-02-14 03:46:10.793677 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-02-14 03:46:10.793690 | orchestrator | Saturday 14 February 2026 03:45:59 +0000 (0:00:23.897) 0:02:59.886 ***** 2026-02-14 03:46:10.793716 | orchestrator | ok: [testbed-node-0] => 
(item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-02-14 03:46:10.793759 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-02-14 03:46:10.793780 | orchestrator | 2026-02-14 03:46:10.793798 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-02-14 03:46:10.793816 | orchestrator | Saturday 14 February 2026 03:46:05 +0000 (0:00:06.829) 0:03:06.715 ***** 2026-02-14 03:46:10.793834 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:46:10.793850 | orchestrator | 2026-02-14 03:46:10.793870 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-02-14 03:46:10.793889 | orchestrator | Saturday 14 February 2026 03:46:06 +0000 (0:00:00.137) 0:03:06.853 ***** 2026-02-14 03:46:10.793908 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:46:10.793927 | orchestrator | 2026-02-14 03:46:10.793946 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-02-14 03:46:10.793958 | orchestrator | Saturday 14 February 2026 03:46:06 +0000 (0:00:00.125) 0:03:06.978 ***** 2026-02-14 03:46:10.793969 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:46:10.793980 | orchestrator | 2026-02-14 03:46:10.794008 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-02-14 03:46:10.794191 | orchestrator | Saturday 14 February 2026 03:46:06 +0000 (0:00:00.132) 0:03:07.111 ***** 2026-02-14 03:46:10.794215 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:46:10.794232 | orchestrator | 2026-02-14 03:46:10.794243 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-02-14 03:46:10.794254 | orchestrator | Saturday 14 February 2026 03:46:06 +0000 (0:00:00.495) 0:03:07.607 ***** 2026-02-14 03:46:10.794265 | orchestrator | ok: [testbed-node-0] 2026-02-14 
03:46:10.794277 | orchestrator | 
2026-02-14 03:46:10.794287 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-14 03:46:10.794298 | orchestrator | Saturday 14 February 2026 03:46:09 +0000 (0:00:03.091) 0:03:10.698 *****
2026-02-14 03:46:10.794309 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:46:10.794320 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:46:10.794331 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:46:10.794342 | orchestrator | 
2026-02-14 03:46:10.794352 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 03:46:10.794376 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-14 03:46:10.794389 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-14 03:46:10.794400 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-14 03:46:10.794411 | orchestrator | 
2026-02-14 03:46:10.794422 | orchestrator | 
2026-02-14 03:46:10.794433 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 03:46:10.794444 | orchestrator | Saturday 14 February 2026 03:46:10 +0000 (0:00:00.464) 0:03:11.163 *****
2026-02-14 03:46:10.794455 | orchestrator | ===============================================================================
2026-02-14 03:46:10.794466 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 48.14s
2026-02-14 03:46:10.794477 | orchestrator | service-ks-register : keystone | Creating services --------------------- 23.90s
2026-02-14 03:46:10.794488 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.69s
2026-02-14 03:46:10.794499 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.74s
2026-02-14 03:46:10.794510 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.33s
2026-02-14 03:46:10.794520 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.49s
2026-02-14 03:46:10.794531 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.09s
2026-02-14 03:46:10.794542 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.27s
2026-02-14 03:46:10.794565 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.83s
2026-02-14 03:46:10.794605 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.57s
2026-02-14 03:46:10.794624 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.20s
2026-02-14 03:46:10.794642 | orchestrator | keystone : Creating default user role ----------------------------------- 3.09s
2026-02-14 03:46:10.794659 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.00s
2026-02-14 03:46:10.794677 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.59s
2026-02-14 03:46:10.794695 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.45s
2026-02-14 03:46:10.794714 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.25s
2026-02-14 03:46:10.794733 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.19s
2026-02-14 03:46:10.794745 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.82s
2026-02-14 03:46:10.794756 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.80s
2026-02-14 03:46:10.794767 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.43s
2026-02-14 03:46:13.088909 | orchestrator | 2026-02-14 03:46:13 | INFO  | Task 75137ddd-9f5f-481c-a4b9-41fd641b7ff1 (placement) was prepared for execution.
2026-02-14 03:46:13.089004 | orchestrator | 2026-02-14 03:46:13 | INFO  | It takes a moment until task 75137ddd-9f5f-481c-a4b9-41fd641b7ff1 (placement) has been started and output is visible here.
2026-02-14 03:46:47.750630 | orchestrator | 
2026-02-14 03:46:47.750767 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-14 03:46:47.750785 | orchestrator | 
2026-02-14 03:46:47.750797 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-14 03:46:47.750810 | orchestrator | Saturday 14 February 2026 03:46:17 +0000 (0:00:00.256) 0:00:00.256 *****
2026-02-14 03:46:47.750821 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:46:47.750834 | orchestrator | ok: [testbed-node-1]
2026-02-14 03:46:47.750846 | orchestrator | ok: [testbed-node-2]
2026-02-14 03:46:47.750857 | orchestrator | 
2026-02-14 03:46:47.750869 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-14 03:46:47.750880 | orchestrator | Saturday 14 February 2026 03:46:17 +0000 (0:00:00.313) 0:00:00.569 *****
2026-02-14 03:46:47.750892 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-02-14 03:46:47.750904 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-02-14 03:46:47.750915 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-02-14 03:46:47.750926 | orchestrator | 
2026-02-14 03:46:47.750954 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-02-14 03:46:47.750966 | orchestrator | 
2026-02-14 03:46:47.750977 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-02-14 03:46:47.750988 | orchestrator | Saturday 14 February 2026 
03:46:17 +0000 (0:00:00.474) 0:00:01.043 *****
2026-02-14 03:46:47.751000 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 03:46:47.751012 | orchestrator | 
2026-02-14 03:46:47.751024 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2026-02-14 03:46:47.751035 | orchestrator | Saturday 14 February 2026 03:46:18 +0000 (0:00:00.545) 0:00:01.589 *****
2026-02-14 03:46:47.751046 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2026-02-14 03:46:47.751057 | orchestrator | 
2026-02-14 03:46:47.751094 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2026-02-14 03:46:47.751106 | orchestrator | Saturday 14 February 2026 03:46:22 +0000 (0:00:03.881) 0:00:05.471 *****
2026-02-14 03:46:47.751117 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2026-02-14 03:46:47.751154 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2026-02-14 03:46:47.751168 | orchestrator | 
2026-02-14 03:46:47.751181 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2026-02-14 03:46:47.751193 | orchestrator | Saturday 14 February 2026 03:46:29 +0000 (0:00:06.718) 0:00:12.189 *****
2026-02-14 03:46:47.751206 | orchestrator | changed: [testbed-node-0] => (item=service)
2026-02-14 03:46:47.751219 | orchestrator | 
2026-02-14 03:46:47.751231 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2026-02-14 03:46:47.751244 | orchestrator | Saturday 14 February 2026 03:46:32 +0000 (0:00:03.660) 0:00:15.850 *****
2026-02-14 03:46:47.751256 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-14 03:46:47.751268 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2026-02-14 03:46:47.751281 | orchestrator | 
2026-02-14 03:46:47.751294 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2026-02-14 03:46:47.751306 | orchestrator | Saturday 14 February 2026 03:46:36 +0000 (0:00:04.061) 0:00:19.911 *****
2026-02-14 03:46:47.751320 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-14 03:46:47.751332 | orchestrator | 
2026-02-14 03:46:47.751345 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2026-02-14 03:46:47.751357 | orchestrator | Saturday 14 February 2026 03:46:39 +0000 (0:00:03.094) 0:00:23.005 *****
2026-02-14 03:46:47.751369 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2026-02-14 03:46:47.751381 | orchestrator | 
2026-02-14 03:46:47.751394 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-02-14 03:46:47.751406 | orchestrator | Saturday 14 February 2026 03:46:43 +0000 (0:00:03.803) 0:00:26.809 *****
2026-02-14 03:46:47.751418 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:46:47.751431 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:46:47.751444 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:46:47.751457 | orchestrator | 
2026-02-14 03:46:47.751469 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2026-02-14 03:46:47.751482 | orchestrator | Saturday 14 February 2026 03:46:44 +0000 (0:00:00.300) 0:00:27.110 *****
2026-02-14 03:46:47.751498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-14 03:46:47.751555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-14 03:46:47.751579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-14 03:46:47.751591 | orchestrator | 2026-02-14 03:46:47.751603 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-02-14 03:46:47.751614 | orchestrator | Saturday 14 February 2026 03:46:44 +0000 (0:00:00.814) 0:00:27.924 ***** 2026-02-14 03:46:47.751625 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:46:47.751637 | orchestrator | 2026-02-14 03:46:47.751648 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-02-14 03:46:47.751659 | orchestrator | Saturday 14 February 2026 03:46:45 +0000 (0:00:00.314) 0:00:28.238 ***** 2026-02-14 03:46:47.751670 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:46:47.751681 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:46:47.751692 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:46:47.751703 | orchestrator | 2026-02-14 03:46:47.751714 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-14 03:46:47.751725 | orchestrator | Saturday 14 February 2026 03:46:45 +0000 (0:00:00.310) 0:00:28.549 ***** 2026-02-14 03:46:47.751736 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:46:47.751747 | orchestrator | 2026-02-14 03:46:47.751758 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-02-14 03:46:47.751769 | orchestrator | Saturday 14 February 2026 03:46:46 +0000 
(0:00:00.568) 0:00:29.118 ***** 2026-02-14 03:46:47.751781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-14 03:46:47.751803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-14 03:46:50.647740 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-14 03:46:50.647892 | orchestrator | 2026-02-14 03:46:50.647924 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-02-14 03:46:50.647938 | orchestrator | Saturday 14 February 2026 03:46:47 +0000 (0:00:01.672) 0:00:30.790 ***** 2026-02-14 03:46:50.647952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-14 03:46:50.647964 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:46:50.647977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-14 03:46:50.647989 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:46:50.648000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-14 03:46:50.648033 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:46:50.648045 | orchestrator | 2026-02-14 03:46:50.648056 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-02-14 03:46:50.648143 | orchestrator | Saturday 14 February 2026 03:46:48 +0000 (0:00:00.514) 0:00:31.305 ***** 2026-02-14 03:46:50.648165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-14 03:46:50.648178 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:46:50.648190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-14 03:46:50.648202 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:46:50.648213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-14 03:46:50.648225 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:46:50.648238 | orchestrator | 2026-02-14 03:46:50.648251 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-02-14 03:46:50.648264 | orchestrator | Saturday 14 February 2026 03:46:48 +0000 (0:00:00.716) 0:00:32.021 ***** 2026-02-14 03:46:50.648276 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-14 03:46:50.648316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-14 03:46:57.551758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 
'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-14 03:46:57.551954 | orchestrator | 2026-02-14 03:46:57.551975 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-02-14 03:46:57.551989 | orchestrator | Saturday 14 February 2026 03:46:50 +0000 (0:00:01.670) 0:00:33.692 ***** 2026-02-14 03:46:57.552002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no'}}}}) 2026-02-14 03:46:57.552014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-14 03:46:57.552168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-14 03:46:57.552189 | orchestrator | 2026-02-14 03:46:57.552200 | orchestrator | 
TASK [placement : Copying over placement-api wsgi configuration] ***************
2026-02-14 03:46:57.552211 | orchestrator | Saturday 14 February 2026 03:46:52 +0000 (0:00:02.288) 0:00:35.980 *****
2026-02-14 03:46:57.552241 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-02-14 03:46:57.552254 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-02-14 03:46:57.552265 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-02-14 03:46:57.552276 | orchestrator | 
2026-02-14 03:46:57.552289 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] *****************
2026-02-14 03:46:57.552302 | orchestrator | Saturday 14 February 2026 03:46:54 +0000 (0:00:01.433) 0:00:37.414 *****
2026-02-14 03:46:57.552315 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:46:57.552328 | orchestrator | changed: [testbed-node-1]
2026-02-14 03:46:57.552340 | orchestrator | changed: [testbed-node-2]
2026-02-14 03:46:57.552353 | orchestrator | 
2026-02-14 03:46:57.552365 | orchestrator | TASK [placement : Copying over existing policy file] ***************************
2026-02-14 03:46:57.552378 | orchestrator | Saturday 14 February 2026 03:46:55 +0000 (0:00:01.315) 0:00:38.729 *****
2026-02-14 03:46:57.552391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-14 03:46:57.552404 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:46:57.552418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-14 03:46:57.552440 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:46:57.552454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-14 03:46:57.552467 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:46:57.552479 | orchestrator | 2026-02-14 03:46:57.552491 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-02-14 03:46:57.552510 | orchestrator | Saturday 14 February 2026 03:46:56 +0000 (0:00:00.767) 0:00:39.497 ***** 2026-02-14 03:46:57.552532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-14 03:47:26.600780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-14 03:47:26.600949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-14 03:47:26.600970 | orchestrator | 2026-02-14 03:47:26.600984 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-02-14 03:47:26.600997 | orchestrator | Saturday 14 February 2026 03:46:57 +0000 (0:00:01.100) 0:00:40.598 ***** 2026-02-14 03:47:26.601008 | orchestrator | changed: [testbed-node-0] 2026-02-14 
03:47:26.601021 | orchestrator | 2026-02-14 03:47:26.601032 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-02-14 03:47:26.601043 | orchestrator | Saturday 14 February 2026 03:46:59 +0000 (0:00:02.081) 0:00:42.679 ***** 2026-02-14 03:47:26.601054 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:47:26.601065 | orchestrator | 2026-02-14 03:47:26.601132 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-02-14 03:47:26.601146 | orchestrator | Saturday 14 February 2026 03:47:01 +0000 (0:00:02.264) 0:00:44.944 ***** 2026-02-14 03:47:26.601157 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:47:26.601168 | orchestrator | 2026-02-14 03:47:26.601179 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-14 03:47:26.601190 | orchestrator | Saturday 14 February 2026 03:47:15 +0000 (0:00:14.078) 0:00:59.022 ***** 2026-02-14 03:47:26.601201 | orchestrator | 2026-02-14 03:47:26.601212 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-14 03:47:26.601223 | orchestrator | Saturday 14 February 2026 03:47:16 +0000 (0:00:00.070) 0:00:59.093 ***** 2026-02-14 03:47:26.601234 | orchestrator | 2026-02-14 03:47:26.601245 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-14 03:47:26.601256 | orchestrator | Saturday 14 February 2026 03:47:16 +0000 (0:00:00.067) 0:00:59.160 ***** 2026-02-14 03:47:26.601267 | orchestrator | 2026-02-14 03:47:26.601278 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-02-14 03:47:26.601291 | orchestrator | Saturday 14 February 2026 03:47:16 +0000 (0:00:00.068) 0:00:59.228 ***** 2026-02-14 03:47:26.601303 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:47:26.601332 | orchestrator | changed: [testbed-node-2] 2026-02-14 
03:47:26.601345 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:47:26.601357 | orchestrator | 2026-02-14 03:47:26.601370 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 03:47:26.601383 | orchestrator | testbed-node-0 : ok=21  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-14 03:47:26.601397 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-14 03:47:26.601409 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-14 03:47:26.601420 | orchestrator | 2026-02-14 03:47:26.601431 | orchestrator | 2026-02-14 03:47:26.601442 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 03:47:26.601453 | orchestrator | Saturday 14 February 2026 03:47:26 +0000 (0:00:10.080) 0:01:09.309 ***** 2026-02-14 03:47:26.601474 | orchestrator | =============================================================================== 2026-02-14 03:47:26.601486 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.08s 2026-02-14 03:47:26.601516 | orchestrator | placement : Restart placement-api container ---------------------------- 10.08s 2026-02-14 03:47:26.601528 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.72s 2026-02-14 03:47:26.601539 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.06s 2026-02-14 03:47:26.601551 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.88s 2026-02-14 03:47:26.601562 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.80s 2026-02-14 03:47:26.601573 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.66s 2026-02-14 03:47:26.601584 | orchestrator | 
service-ks-register : placement | Creating roles ------------------------ 3.09s 2026-02-14 03:47:26.601595 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.29s 2026-02-14 03:47:26.601606 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.26s 2026-02-14 03:47:26.601617 | orchestrator | placement : Creating placement databases -------------------------------- 2.08s 2026-02-14 03:47:26.601628 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.67s 2026-02-14 03:47:26.601639 | orchestrator | placement : Copying over config.json files for services ----------------- 1.67s 2026-02-14 03:47:26.601650 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.43s 2026-02-14 03:47:26.601661 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.32s 2026-02-14 03:47:26.601673 | orchestrator | placement : Check placement containers ---------------------------------- 1.10s 2026-02-14 03:47:26.601684 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.81s 2026-02-14 03:47:26.601695 | orchestrator | placement : Copying over existing policy file --------------------------- 0.77s 2026-02-14 03:47:26.601706 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.72s 2026-02-14 03:47:26.601717 | orchestrator | placement : include_tasks ----------------------------------------------- 0.57s 2026-02-14 03:47:28.964455 | orchestrator | 2026-02-14 03:47:28 | INFO  | Task f4a8bc2b-f2eb-4b1a-a935-291bd0d395dd (neutron) was prepared for execution. 2026-02-14 03:47:28.964579 | orchestrator | 2026-02-14 03:47:28 | INFO  | It takes a moment until task f4a8bc2b-f2eb-4b1a-a935-291bd0d395dd (neutron) has been started and output is visible here. 
2026-02-14 03:48:17.655699 | orchestrator | 2026-02-14 03:48:17.655830 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-14 03:48:17.655848 | orchestrator | 2026-02-14 03:48:17.655861 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-14 03:48:17.655873 | orchestrator | Saturday 14 February 2026 03:47:33 +0000 (0:00:00.256) 0:00:00.256 ***** 2026-02-14 03:48:17.655884 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:48:17.655896 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:48:17.655908 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:48:17.655919 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:48:17.655930 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:48:17.655941 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:48:17.655952 | orchestrator | 2026-02-14 03:48:17.655963 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-14 03:48:17.655975 | orchestrator | Saturday 14 February 2026 03:47:33 +0000 (0:00:00.698) 0:00:00.954 ***** 2026-02-14 03:48:17.655986 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-02-14 03:48:17.655997 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-02-14 03:48:17.656008 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-02-14 03:48:17.656019 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-02-14 03:48:17.656030 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-02-14 03:48:17.656067 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-02-14 03:48:17.656078 | orchestrator | 2026-02-14 03:48:17.656161 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-02-14 03:48:17.656181 | orchestrator | 2026-02-14 03:48:17.656193 | orchestrator | TASK [neutron : include_tasks] 
************************************************* 2026-02-14 03:48:17.656204 | orchestrator | Saturday 14 February 2026 03:47:34 +0000 (0:00:00.624) 0:00:01.579 ***** 2026-02-14 03:48:17.656230 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-14 03:48:17.656243 | orchestrator | 2026-02-14 03:48:17.656256 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-02-14 03:48:17.656268 | orchestrator | Saturday 14 February 2026 03:47:35 +0000 (0:00:01.236) 0:00:02.815 ***** 2026-02-14 03:48:17.656280 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:48:17.656292 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:48:17.656305 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:48:17.656317 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:48:17.656329 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:48:17.656341 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:48:17.656354 | orchestrator | 2026-02-14 03:48:17.656367 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-02-14 03:48:17.656379 | orchestrator | Saturday 14 February 2026 03:47:36 +0000 (0:00:01.276) 0:00:04.092 ***** 2026-02-14 03:48:17.656392 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:48:17.656404 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:48:17.656415 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:48:17.656425 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:48:17.656435 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:48:17.656446 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:48:17.656456 | orchestrator | 2026-02-14 03:48:17.656467 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-02-14 03:48:17.656478 | orchestrator | Saturday 14 February 2026 03:47:37 +0000 (0:00:01.051) 0:00:05.143 ***** 
2026-02-14 03:48:17.656488 | orchestrator | ok: [testbed-node-0] => { 2026-02-14 03:48:17.656500 | orchestrator |  "changed": false, 2026-02-14 03:48:17.656511 | orchestrator |  "msg": "All assertions passed" 2026-02-14 03:48:17.656522 | orchestrator | } 2026-02-14 03:48:17.656532 | orchestrator | ok: [testbed-node-1] => { 2026-02-14 03:48:17.656543 | orchestrator |  "changed": false, 2026-02-14 03:48:17.656554 | orchestrator |  "msg": "All assertions passed" 2026-02-14 03:48:17.656564 | orchestrator | } 2026-02-14 03:48:17.656575 | orchestrator | ok: [testbed-node-2] => { 2026-02-14 03:48:17.656585 | orchestrator |  "changed": false, 2026-02-14 03:48:17.656595 | orchestrator |  "msg": "All assertions passed" 2026-02-14 03:48:17.656606 | orchestrator | } 2026-02-14 03:48:17.656617 | orchestrator | ok: [testbed-node-3] => { 2026-02-14 03:48:17.656627 | orchestrator |  "changed": false, 2026-02-14 03:48:17.656638 | orchestrator |  "msg": "All assertions passed" 2026-02-14 03:48:17.656648 | orchestrator | } 2026-02-14 03:48:17.656659 | orchestrator | ok: [testbed-node-4] => { 2026-02-14 03:48:17.656669 | orchestrator |  "changed": false, 2026-02-14 03:48:17.656681 | orchestrator |  "msg": "All assertions passed" 2026-02-14 03:48:17.656692 | orchestrator | } 2026-02-14 03:48:17.656702 | orchestrator | ok: [testbed-node-5] => { 2026-02-14 03:48:17.656713 | orchestrator |  "changed": false, 2026-02-14 03:48:17.656723 | orchestrator |  "msg": "All assertions passed" 2026-02-14 03:48:17.656734 | orchestrator | } 2026-02-14 03:48:17.656745 | orchestrator | 2026-02-14 03:48:17.656756 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-02-14 03:48:17.656766 | orchestrator | Saturday 14 February 2026 03:47:38 +0000 (0:00:00.830) 0:00:05.973 ***** 2026-02-14 03:48:17.656777 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:48:17.656788 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:48:17.656798 | orchestrator 
| skipping: [testbed-node-2] 2026-02-14 03:48:17.656818 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:48:17.656829 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:48:17.656839 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:48:17.656850 | orchestrator | 2026-02-14 03:48:17.656860 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-02-14 03:48:17.656871 | orchestrator | Saturday 14 February 2026 03:47:39 +0000 (0:00:00.617) 0:00:06.591 ***** 2026-02-14 03:48:17.656882 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-02-14 03:48:17.656893 | orchestrator | 2026-02-14 03:48:17.656903 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-02-14 03:48:17.656914 | orchestrator | Saturday 14 February 2026 03:47:43 +0000 (0:00:04.084) 0:00:10.676 ***** 2026-02-14 03:48:17.656925 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-02-14 03:48:17.656937 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-02-14 03:48:17.656948 | orchestrator | 2026-02-14 03:48:17.656977 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-02-14 03:48:17.656988 | orchestrator | Saturday 14 February 2026 03:47:50 +0000 (0:00:06.571) 0:00:17.248 ***** 2026-02-14 03:48:17.656999 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-14 03:48:17.657009 | orchestrator | 2026-02-14 03:48:17.657020 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-02-14 03:48:17.657030 | orchestrator | Saturday 14 February 2026 03:47:53 +0000 (0:00:03.130) 0:00:20.379 ***** 2026-02-14 03:48:17.657041 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-14 03:48:17.657052 | orchestrator | changed: 
[testbed-node-0] => (item=neutron -> service) 2026-02-14 03:48:17.657063 | orchestrator | 2026-02-14 03:48:17.657073 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-02-14 03:48:17.657084 | orchestrator | Saturday 14 February 2026 03:47:57 +0000 (0:00:04.202) 0:00:24.582 ***** 2026-02-14 03:48:17.657119 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-14 03:48:17.657130 | orchestrator | 2026-02-14 03:48:17.657141 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-02-14 03:48:17.657152 | orchestrator | Saturday 14 February 2026 03:48:00 +0000 (0:00:03.178) 0:00:27.761 ***** 2026-02-14 03:48:17.657162 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-02-14 03:48:17.657173 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-02-14 03:48:17.657183 | orchestrator | 2026-02-14 03:48:17.657194 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-14 03:48:17.657205 | orchestrator | Saturday 14 February 2026 03:48:08 +0000 (0:00:07.733) 0:00:35.494 ***** 2026-02-14 03:48:17.657215 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:48:17.657226 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:48:17.657237 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:48:17.657247 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:48:17.657258 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:48:17.657275 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:48:17.657286 | orchestrator | 2026-02-14 03:48:17.657297 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-02-14 03:48:17.657307 | orchestrator | Saturday 14 February 2026 03:48:09 +0000 (0:00:00.761) 0:00:36.256 ***** 2026-02-14 03:48:17.657318 | orchestrator | skipping: [testbed-node-0] 2026-02-14 
03:48:17.657329 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:48:17.657339 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:48:17.657350 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:48:17.657360 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:48:17.657371 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:48:17.657381 | orchestrator | 2026-02-14 03:48:17.657392 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-02-14 03:48:17.657403 | orchestrator | Saturday 14 February 2026 03:48:11 +0000 (0:00:02.031) 0:00:38.287 ***** 2026-02-14 03:48:17.657420 | orchestrator | ok: [testbed-node-0] 2026-02-14 03:48:17.657431 | orchestrator | ok: [testbed-node-1] 2026-02-14 03:48:17.657441 | orchestrator | ok: [testbed-node-2] 2026-02-14 03:48:17.657452 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:48:17.657462 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:48:17.657473 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:48:17.657484 | orchestrator | 2026-02-14 03:48:17.657494 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-02-14 03:48:17.657505 | orchestrator | Saturday 14 February 2026 03:48:13 +0000 (0:00:01.979) 0:00:40.267 ***** 2026-02-14 03:48:17.657516 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:48:17.657526 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:48:17.657537 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:48:17.657547 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:48:17.657558 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:48:17.657569 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:48:17.657579 | orchestrator | 2026-02-14 03:48:17.657590 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-02-14 03:48:17.657601 | orchestrator | Saturday 14 February 2026 03:48:15 +0000 (0:00:02.171) 
0:00:42.438 ***** 2026-02-14 03:48:17.657615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-14 03:48:17.657641 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-14 03:48:23.044396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 
'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-14 03:48:23.044527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-14 03:48:23.044565 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-14 03:48:23.044580 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-14 03:48:23.044592 | orchestrator | 2026-02-14 03:48:23.044606 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-02-14 03:48:23.044619 | orchestrator | Saturday 14 February 2026 03:48:17 +0000 (0:00:02.362) 0:00:44.801 ***** 2026-02-14 03:48:23.044630 | orchestrator | [WARNING]: Skipped 2026-02-14 03:48:23.044643 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-02-14 03:48:23.044655 | orchestrator | due to this access issue: 2026-02-14 03:48:23.044667 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-02-14 03:48:23.044678 | orchestrator | a directory 2026-02-14 03:48:23.044690 
| orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-14 03:48:23.044700 | orchestrator | 2026-02-14 03:48:23.044712 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-14 03:48:23.044723 | orchestrator | Saturday 14 February 2026 03:48:18 +0000 (0:00:00.882) 0:00:45.684 ***** 2026-02-14 03:48:23.044751 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-14 03:48:23.044764 | orchestrator | 2026-02-14 03:48:23.044775 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-02-14 03:48:23.044786 | orchestrator | Saturday 14 February 2026 03:48:19 +0000 (0:00:01.334) 0:00:47.018 ***** 2026-02-14 03:48:23.044803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-14 03:48:23.044825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-14 03:48:23.044837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-14 03:48:23.044849 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-14 03:48:23.044869 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-14 03:48:27.764068 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-14 03:48:27.764236 | orchestrator | 2026-02-14 03:48:27.764255 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-02-14 03:48:27.764268 | orchestrator | Saturday 14 February 2026 03:48:23 +0000 (0:00:03.167) 0:00:50.186 ***** 2026-02-14 03:48:27.764282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-14 03:48:27.764295 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:48:27.764308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-14 03:48:27.764320 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:48:27.764331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-14 03:48:27.764342 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:48:27.764399 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-14 03:48:27.764421 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:48:27.764450 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-14 03:48:27.764469 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:48:27.764481 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-14 03:48:27.764492 | orchestrator | skipping: [testbed-node-5] 
2026-02-14 03:48:27.764510 | orchestrator | 2026-02-14 03:48:27.764529 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-02-14 03:48:27.764549 | orchestrator | Saturday 14 February 2026 03:48:25 +0000 (0:00:01.977) 0:00:52.163 ***** 2026-02-14 03:48:27.764567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-14 03:48:27.764586 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:48:27.764616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-14 03:48:33.047849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-14 03:48:33.047933 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:48:33.047944 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:48:33.047953 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-14 03:48:33.047960 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:48:33.047967 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-14 03:48:33.047974 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:48:33.047981 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-14 03:48:33.048004 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:48:33.048010 | orchestrator | 2026-02-14 
03:48:33.048018 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-02-14 03:48:33.048026 | orchestrator | Saturday 14 February 2026 03:48:27 +0000 (0:00:02.746) 0:00:54.909 ***** 2026-02-14 03:48:33.048032 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:48:33.048038 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:48:33.048044 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:48:33.048050 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:48:33.048056 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:48:33.048076 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:48:33.048082 | orchestrator | 2026-02-14 03:48:33.048088 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-02-14 03:48:33.048136 | orchestrator | Saturday 14 February 2026 03:48:29 +0000 (0:00:02.245) 0:00:57.154 ***** 2026-02-14 03:48:33.048143 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:48:33.048150 | orchestrator | 2026-02-14 03:48:33.048156 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-02-14 03:48:33.048174 | orchestrator | Saturday 14 February 2026 03:48:30 +0000 (0:00:00.140) 0:00:57.295 ***** 2026-02-14 03:48:33.048181 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:48:33.048187 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:48:33.048193 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:48:33.048199 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:48:33.048205 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:48:33.048211 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:48:33.048218 | orchestrator | 2026-02-14 03:48:33.048224 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-02-14 03:48:33.048230 | orchestrator | Saturday 14 February 2026 03:48:30 +0000 (0:00:00.603) 
0:00:57.898 ***** 2026-02-14 03:48:33.048242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-14 03:48:33.048249 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:48:33.048255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-14 
03:48:33.048262 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:48:33.048273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-14 03:48:33.048279 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:48:33.048286 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-14 03:48:33.048292 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:48:33.048307 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-14 03:48:41.146573 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:48:41.146693 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-14 03:48:41.146721 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:48:41.146734 | orchestrator | 2026-02-14 03:48:41.146745 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-02-14 03:48:41.146756 | orchestrator | Saturday 14 February 2026 03:48:33 +0000 (0:00:02.294) 0:01:00.192 ***** 2026-02-14 03:48:41.146768 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-14 03:48:41.146804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-14 03:48:41.146815 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-14 03:48:41.146859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-14 03:48:41.146871 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-14 03:48:41.146889 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-14 03:48:41.146899 | orchestrator | 2026-02-14 03:48:41.146909 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-02-14 03:48:41.146919 | orchestrator | Saturday 14 February 2026 03:48:36 +0000 (0:00:03.071) 0:01:03.263 ***** 2026-02-14 03:48:41.146929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-14 03:48:41.146939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-14 03:48:41.146962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-14 03:48:45.884676 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-14 03:48:45.884819 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-14 03:48:45.884837 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-14 03:48:45.884850 | orchestrator | 2026-02-14 03:48:45.884864 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-02-14 03:48:45.884876 | orchestrator | Saturday 14 February 2026 03:48:41 +0000 (0:00:05.030) 0:01:08.293 ***** 2026-02-14 03:48:45.884888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-14 03:48:45.884917 | orchestrator | skipping: 
[testbed-node-0] 2026-02-14 03:48:45.884964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-14 03:48:45.884985 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:48:45.884996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-14 
03:48:45.885008 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:48:45.885019 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-14 03:48:45.885031 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:48:45.885042 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-14 03:48:45.885053 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:48:45.885070 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-14 03:48:45.885082 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:48:45.885093 | orchestrator | 2026-02-14 03:48:45.885162 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-02-14 03:48:45.885187 | orchestrator | Saturday 14 February 2026 03:48:43 +0000 (0:00:02.068) 0:01:10.362 ***** 2026-02-14 03:48:45.885198 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:48:45.885209 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:48:45.885220 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:48:45.885231 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:48:45.885242 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:48:45.885253 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:48:45.885264 | orchestrator | 2026-02-14 03:48:45.885275 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-02-14 03:48:45.885295 | orchestrator | Saturday 14 February 2026 03:48:45 +0000 (0:00:02.665) 0:01:13.028 ***** 2026-02-14 03:49:04.491560 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-14 03:49:04.491682 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:49:04.491702 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-14 03:49:04.491715 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:49:04.491727 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-14 03:49:04.491757 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:49:04.491770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-14 03:49:04.491856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-14 03:49:04.491872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-14 03:49:04.491883 | orchestrator | 2026-02-14 03:49:04.491896 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-02-14 03:49:04.491908 | orchestrator | Saturday 14 February 2026 03:48:49 +0000 (0:00:03.507) 0:01:16.535 ***** 2026-02-14 03:49:04.491919 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:49:04.491930 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:49:04.491941 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:49:04.491952 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:49:04.491963 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:49:04.491974 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:49:04.491986 | orchestrator | 2026-02-14 03:49:04.491997 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] 
**************************** 2026-02-14 03:49:04.492008 | orchestrator | Saturday 14 February 2026 03:48:51 +0000 (0:00:02.313) 0:01:18.848 ***** 2026-02-14 03:49:04.492019 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:49:04.492030 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:49:04.492041 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:49:04.492052 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:49:04.492062 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:49:04.492073 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:49:04.492085 | orchestrator | 2026-02-14 03:49:04.492099 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-02-14 03:49:04.492134 | orchestrator | Saturday 14 February 2026 03:48:53 +0000 (0:00:02.082) 0:01:20.931 ***** 2026-02-14 03:49:04.492146 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:49:04.492160 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:49:04.492173 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:49:04.492185 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:49:04.492197 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:49:04.492210 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:49:04.492222 | orchestrator | 2026-02-14 03:49:04.492234 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-02-14 03:49:04.492256 | orchestrator | Saturday 14 February 2026 03:48:55 +0000 (0:00:02.118) 0:01:23.050 ***** 2026-02-14 03:49:04.492269 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:49:04.492282 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:49:04.492294 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:49:04.492306 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:49:04.492318 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:49:04.492331 | orchestrator | skipping: [testbed-node-5] 
2026-02-14 03:49:04.492342 | orchestrator | 2026-02-14 03:49:04.492355 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-02-14 03:49:04.492368 | orchestrator | Saturday 14 February 2026 03:48:57 +0000 (0:00:02.090) 0:01:25.140 ***** 2026-02-14 03:49:04.492380 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:49:04.492392 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:49:04.492404 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:49:04.492416 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:49:04.492428 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:49:04.492440 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:49:04.492452 | orchestrator | 2026-02-14 03:49:04.492463 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-02-14 03:49:04.492474 | orchestrator | Saturday 14 February 2026 03:49:00 +0000 (0:00:02.185) 0:01:27.326 ***** 2026-02-14 03:49:04.492484 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:49:04.492495 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:49:04.492506 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:49:04.492517 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:49:04.492533 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:49:04.492545 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:49:04.492556 | orchestrator | 2026-02-14 03:49:04.492567 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-02-14 03:49:04.492578 | orchestrator | Saturday 14 February 2026 03:49:02 +0000 (0:00:02.090) 0:01:29.417 ***** 2026-02-14 03:49:04.492589 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-14 03:49:04.492600 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:49:04.492611 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-14 03:49:04.492622 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:49:04.492633 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-14 03:49:04.492644 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:49:04.492655 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-14 03:49:04.492666 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:49:04.492684 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-14 03:49:08.543394 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:49:08.543498 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-14 03:49:08.543513 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:49:08.543524 | orchestrator | 2026-02-14 03:49:08.543536 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-02-14 03:49:08.543548 | orchestrator | Saturday 14 February 2026 03:49:04 +0000 (0:00:02.202) 0:01:31.619 ***** 2026-02-14 03:49:08.543562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-14 03:49:08.543600 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:49:08.543613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-14 03:49:08.543625 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:49:08.543636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-14 03:49:08.543648 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:49:08.543675 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-14 03:49:08.543688 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:49:08.543718 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-14 03:49:08.543738 | orchestrator | skipping: 
[testbed-node-4] 2026-02-14 03:49:08.543749 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-14 03:49:08.543761 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:49:08.543772 | orchestrator | 2026-02-14 03:49:08.543783 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-02-14 03:49:08.543794 | orchestrator | Saturday 14 February 2026 03:49:06 +0000 (0:00:02.075) 0:01:33.695 ***** 2026-02-14 03:49:08.543805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-14 03:49:08.543816 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:49:08.543832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-14 03:49:08.543844 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:49:08.543864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-14 03:49:33.901714 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:49:33.901833 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-14 03:49:33.901854 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:49:33.901867 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-14 03:49:33.901879 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:49:33.901890 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-14 03:49:33.901901 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:49:33.901913 | orchestrator | 2026-02-14 03:49:33.901925 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-02-14 03:49:33.901937 | orchestrator | Saturday 14 February 2026 03:49:08 +0000 (0:00:01.992) 0:01:35.688 ***** 2026-02-14 03:49:33.901952 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:49:33.901971 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:49:33.901988 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:49:33.902007 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:49:33.902108 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:49:33.902197 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:49:33.902217 | orchestrator | 2026-02-14 03:49:33.902258 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-02-14 03:49:33.902282 | orchestrator | Saturday 14 February 2026 03:49:10 +0000 (0:00:02.203) 0:01:37.892 ***** 2026-02-14 03:49:33.902305 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:49:33.902324 | orchestrator | skipping: [testbed-node-0] 2026-02-14 
03:49:33.902343 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:49:33.902362 | orchestrator | changed: [testbed-node-3] 2026-02-14 03:49:33.902380 | orchestrator | changed: [testbed-node-4] 2026-02-14 03:49:33.902398 | orchestrator | changed: [testbed-node-5] 2026-02-14 03:49:33.902417 | orchestrator | 2026-02-14 03:49:33.902436 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-02-14 03:49:33.902488 | orchestrator | Saturday 14 February 2026 03:49:14 +0000 (0:00:03.654) 0:01:41.546 ***** 2026-02-14 03:49:33.902508 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:49:33.902527 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:49:33.902546 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:49:33.902565 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:49:33.902585 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:49:33.902602 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:49:33.902621 | orchestrator | 2026-02-14 03:49:33.902640 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-02-14 03:49:33.902660 | orchestrator | Saturday 14 February 2026 03:49:16 +0000 (0:00:02.175) 0:01:43.721 ***** 2026-02-14 03:49:33.902679 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:49:33.902699 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:49:33.902718 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:49:33.902738 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:49:33.902757 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:49:33.902775 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:49:33.902795 | orchestrator | 2026-02-14 03:49:33.902816 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-02-14 03:49:33.902864 | orchestrator | Saturday 14 February 2026 03:49:18 +0000 (0:00:02.181) 0:01:45.903 ***** 2026-02-14 
03:49:33.902885 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:49:33.902905 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:49:33.902926 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:49:33.902946 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:49:33.902967 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:49:33.902986 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:49:33.903006 | orchestrator | 2026-02-14 03:49:33.903027 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-02-14 03:49:33.903048 | orchestrator | Saturday 14 February 2026 03:49:20 +0000 (0:00:02.236) 0:01:48.140 ***** 2026-02-14 03:49:33.903068 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:49:33.903088 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:49:33.903109 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:49:33.903159 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:49:33.903179 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:49:33.903199 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:49:33.903218 | orchestrator | 2026-02-14 03:49:33.903236 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-02-14 03:49:33.903255 | orchestrator | Saturday 14 February 2026 03:49:23 +0000 (0:00:02.139) 0:01:50.280 ***** 2026-02-14 03:49:33.903273 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:49:33.903290 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:49:33.903309 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:49:33.903327 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:49:33.903344 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:49:33.903362 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:49:33.903379 | orchestrator | 2026-02-14 03:49:33.903397 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] 
************************** 2026-02-14 03:49:33.903417 | orchestrator | Saturday 14 February 2026 03:49:25 +0000 (0:00:02.156) 0:01:52.436 ***** 2026-02-14 03:49:33.903435 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:49:33.903453 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:49:33.903471 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:49:33.903490 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:49:33.903509 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:49:33.903528 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:49:33.903548 | orchestrator | 2026-02-14 03:49:33.903567 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-02-14 03:49:33.903586 | orchestrator | Saturday 14 February 2026 03:49:27 +0000 (0:00:02.192) 0:01:54.629 ***** 2026-02-14 03:49:33.903604 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:49:33.903643 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:49:33.903663 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:49:33.903682 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:49:33.903700 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:49:33.903717 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:49:33.903733 | orchestrator | 2026-02-14 03:49:33.903744 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-02-14 03:49:33.903755 | orchestrator | Saturday 14 February 2026 03:49:29 +0000 (0:00:02.222) 0:01:56.851 ***** 2026-02-14 03:49:33.903766 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-14 03:49:33.903778 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:49:33.903789 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-14 03:49:33.903800 | orchestrator | skipping: [testbed-node-1] 
2026-02-14 03:49:33.903810 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-14 03:49:33.903821 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:49:33.903832 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-14 03:49:33.903843 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:49:33.903854 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-14 03:49:33.903865 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:49:33.903880 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-14 03:49:33.903911 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:49:33.903928 | orchestrator |
2026-02-14 03:49:33.903946 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-02-14 03:49:33.903963 | orchestrator | Saturday 14 February 2026 03:49:31 +0000 (0:00:01.839) 0:01:58.691 *****
2026-02-14 03:49:33.903986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-14 03:49:33.904008 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:49:33.904050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-14 03:49:36.401827 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:49:36.401937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-14 03:49:36.401952 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:49:36.401963 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-14 03:49:36.401974 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:49:36.401995 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-14 03:49:36.402005 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:49:36.402014 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-14 03:49:36.402078 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:49:36.402088 | orchestrator |
2026-02-14 03:49:36.402097 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2026-02-14 03:49:36.402108 | orchestrator | Saturday 14 February 2026 03:49:33 +0000 (0:00:02.353) 0:02:01.044 *****
2026-02-14 03:49:36.402169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-14 03:49:36.402190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-14 03:49:36.402205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-14 03:49:36.402215 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-14 03:49:36.402224 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-14 03:49:36.402245 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-14 03:52:01.103907 | orchestrator |
2026-02-14 03:52:01.104014 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-02-14 03:52:01.104028 | orchestrator | Saturday 14 February 2026 03:49:36 +0000 (0:00:02.503) 0:02:03.548 *****
2026-02-14 03:52:01.104038 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:52:01.104048 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:52:01.104057 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:52:01.104066 | orchestrator | skipping: [testbed-node-3]
2026-02-14 03:52:01.104075 | orchestrator | skipping: [testbed-node-4]
2026-02-14 03:52:01.104084 | orchestrator | skipping: [testbed-node-5]
2026-02-14 03:52:01.104092 | orchestrator |
2026-02-14 03:52:01.104101 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-02-14 03:52:01.104110 | orchestrator | Saturday 14 February 2026 03:49:37 +0000 (0:00:00.746) 0:02:04.294 *****
2026-02-14 03:52:01.104119 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:52:01.104128 | orchestrator |
2026-02-14 03:52:01.104137 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-02-14 03:52:01.104145 | orchestrator | Saturday 14 February 2026 03:49:39 +0000 (0:00:02.066) 0:02:06.360 *****
2026-02-14 03:52:01.104154 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:52:01.104163 | orchestrator |
2026-02-14 03:52:01.104172 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-02-14 03:52:01.104180 | orchestrator | Saturday 14 February 2026 03:49:41 +0000 (0:00:02.228) 0:02:08.589 *****
2026-02-14 03:52:01.104227 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:52:01.104236 | orchestrator |
2026-02-14 03:52:01.104244 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-14 03:52:01.104254 | orchestrator | Saturday 14 February 2026 03:50:26 +0000 (0:00:45.088) 0:02:53.677 *****
2026-02-14 03:52:01.104262 | orchestrator |
2026-02-14 03:52:01.104271 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-14 03:52:01.104280 | orchestrator | Saturday 14 February 2026 03:50:26 +0000 (0:00:00.069) 0:02:53.747 *****
2026-02-14 03:52:01.104288 | orchestrator |
2026-02-14 03:52:01.104297 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-14 03:52:01.104305 | orchestrator | Saturday 14 February 2026 03:50:26 +0000 (0:00:00.086) 0:02:53.834 *****
2026-02-14 03:52:01.104314 | orchestrator |
2026-02-14 03:52:01.104323 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-14 03:52:01.104331 | orchestrator | Saturday 14 February 2026 03:50:26 +0000 (0:00:00.069) 0:02:53.903 *****
2026-02-14 03:52:01.104340 | orchestrator |
2026-02-14 03:52:01.104362 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-14 03:52:01.104371 | orchestrator | Saturday 14 February 2026 03:50:26 +0000 (0:00:00.071) 0:02:53.975 *****
2026-02-14 03:52:01.104380 | orchestrator |
2026-02-14 03:52:01.104389 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-14 03:52:01.104397 | orchestrator | Saturday 14 February 2026 03:50:26 +0000 (0:00:00.067) 0:02:54.043 *****
2026-02-14 03:52:01.104406 | orchestrator |
2026-02-14 03:52:01.104414 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-02-14 03:52:01.104423 | orchestrator | Saturday 14 February 2026 03:50:26 +0000 (0:00:00.070) 0:02:54.114 *****
2026-02-14 03:52:01.104452 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:52:01.104463 | orchestrator | changed: [testbed-node-2]
2026-02-14 03:52:01.104474 | orchestrator | changed: [testbed-node-1]
2026-02-14 03:52:01.104485 | orchestrator |
2026-02-14 03:52:01.104495 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-02-14 03:52:01.104504 | orchestrator | Saturday 14 February 2026 03:50:56 +0000 (0:00:29.693) 0:03:23.807 *****
2026-02-14 03:52:01.104514 | orchestrator | changed: [testbed-node-5]
2026-02-14 03:52:01.104525 | orchestrator | changed: [testbed-node-3]
2026-02-14 03:52:01.104534 | orchestrator | changed: [testbed-node-4]
2026-02-14 03:52:01.104544 | orchestrator |
2026-02-14 03:52:01.104554 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 03:52:01.104566 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-14 03:52:01.104577 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-02-14 03:52:01.104587 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-02-14 03:52:01.104597 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-14 03:52:01.104607 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-14 03:52:01.104618 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-14 03:52:01.104627 | orchestrator |
2026-02-14 03:52:01.104638 | orchestrator |
2026-02-14 03:52:01.104648 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 03:52:01.104658 | orchestrator | Saturday 14 February 2026 03:52:00 +0000 (0:01:03.980) 0:04:27.788 *****
2026-02-14 03:52:01.104668 | orchestrator | ===============================================================================
2026-02-14 03:52:01.104678 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 63.98s
2026-02-14 03:52:01.104688 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 45.09s
2026-02-14 03:52:01.104698 | orchestrator | neutron : Restart neutron-server container ----------------------------- 29.69s
2026-02-14 03:52:01.104723 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.73s
2026-02-14 03:52:01.104735 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.57s
2026-02-14 03:52:01.104745 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.03s
2026-02-14 03:52:01.104755 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.20s
2026-02-14 03:52:01.104765 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 4.08s
2026-02-14 03:52:01.104775 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.65s
2026-02-14 03:52:01.104785 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.51s
2026-02-14 03:52:01.104795 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.18s
2026-02-14 03:52:01.104805 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.17s
2026-02-14 03:52:01.104815 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.13s
2026-02-14 03:52:01.104824 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.07s
2026-02-14 03:52:01.104833 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 2.75s
2026-02-14 03:52:01.104842 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 2.67s
2026-02-14 03:52:01.104857 | orchestrator | neutron : Check neutron containers -------------------------------------- 2.50s
2026-02-14 03:52:01.104866 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 2.36s
2026-02-14 03:52:01.104874 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 2.35s
2026-02-14 03:52:01.104883 | orchestrator | neutron : Copying over linuxbridge_agent.ini ---------------------------- 2.31s
2026-02-14 03:52:03.416700 | orchestrator | 2026-02-14 03:52:03 | INFO  | Task f1b48461-f807-419d-aa44-af42a25165b6 (nova) was prepared for execution.
2026-02-14 03:52:03.416825 | orchestrator | 2026-02-14 03:52:03 | INFO  | It takes a moment until task f1b48461-f807-419d-aa44-af42a25165b6 (nova) has been started and output is visible here.
2026-02-14 03:53:59.626880 | orchestrator |
2026-02-14 03:53:59.627017 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-14 03:53:59.627035 | orchestrator |
2026-02-14 03:53:59.627047 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-02-14 03:53:59.627058 | orchestrator | Saturday 14 February 2026 03:52:07 +0000 (0:00:00.276) 0:00:00.276 *****
2026-02-14 03:53:59.627069 | orchestrator | changed: [testbed-manager]
2026-02-14 03:53:59.627081 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:53:59.627092 | orchestrator | changed: [testbed-node-1]
2026-02-14 03:53:59.627103 | orchestrator | changed: [testbed-node-2]
2026-02-14 03:53:59.627114 | orchestrator | changed: [testbed-node-3]
2026-02-14 03:53:59.627125 | orchestrator | changed: [testbed-node-4]
2026-02-14 03:53:59.627135 | orchestrator | changed: [testbed-node-5]
2026-02-14 03:53:59.627146 | orchestrator |
2026-02-14 03:53:59.627157 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-14 03:53:59.627168 | orchestrator | Saturday 14 February 2026 03:52:08 +0000 (0:00:00.837) 0:00:01.114 *****
2026-02-14 03:53:59.627178 | orchestrator | changed: [testbed-manager]
2026-02-14 03:53:59.627189 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:53:59.627199 | orchestrator | changed: [testbed-node-1]
2026-02-14 03:53:59.627210 | orchestrator | changed: [testbed-node-2]
2026-02-14 03:53:59.627221 | orchestrator | changed: [testbed-node-3]
2026-02-14 03:53:59.627231 | orchestrator | changed: [testbed-node-4]
2026-02-14 03:53:59.627242 | orchestrator | changed: [testbed-node-5]
2026-02-14 03:53:59.627287 | orchestrator |
2026-02-14 03:53:59.627308 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-14 03:53:59.627326 | orchestrator | Saturday 14 February 2026 03:52:09 +0000 (0:00:00.861) 0:00:01.976 *****
2026-02-14 03:53:59.627345 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-02-14 03:53:59.627363 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-02-14 03:53:59.627381 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-02-14 03:53:59.627398 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-02-14 03:53:59.627417 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-02-14 03:53:59.627436 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-02-14 03:53:59.627455 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-02-14 03:53:59.627474 | orchestrator |
2026-02-14 03:53:59.627494 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-02-14 03:53:59.627514 | orchestrator |
2026-02-14 03:53:59.627533 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-02-14 03:53:59.627550 | orchestrator | Saturday 14 February 2026 03:52:10 +0000 (0:00:00.751) 0:00:02.727 *****
2026-02-14 03:53:59.627564 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 03:53:59.627576 | orchestrator |
2026-02-14 03:53:59.627589 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-02-14 03:53:59.627601 | orchestrator | Saturday 14 February 2026 03:52:10 +0000 (0:00:00.780) 0:00:03.508 *****
2026-02-14 03:53:59.627614 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-02-14 03:53:59.627652 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-02-14 03:53:59.627665 | orchestrator |
2026-02-14 03:53:59.627678 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-02-14 03:53:59.627690 | orchestrator | Saturday 14 February 2026 03:52:15 +0000 (0:00:04.253) 0:00:07.762 *****
2026-02-14 03:53:59.627702 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-14 03:53:59.627715 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-14 03:53:59.627728 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:53:59.627740 | orchestrator |
2026-02-14 03:53:59.627752 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-02-14 03:53:59.627765 | orchestrator | Saturday 14 February 2026 03:52:19 +0000 (0:00:04.105) 0:00:11.867 *****
2026-02-14 03:53:59.627777 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:53:59.627789 | orchestrator |
2026-02-14 03:53:59.627800 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-02-14 03:53:59.627811 | orchestrator | Saturday 14 February 2026 03:52:19 +0000 (0:00:00.647) 0:00:12.515 *****
2026-02-14 03:53:59.627821 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:53:59.627832 | orchestrator |
2026-02-14 03:53:59.627843 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-02-14 03:53:59.627854 | orchestrator | Saturday 14 February 2026 03:52:21 +0000 (0:00:01.264) 0:00:13.779 *****
2026-02-14 03:53:59.627864 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:53:59.627875 | orchestrator |
2026-02-14 03:53:59.627885 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-14 03:53:59.627896 | orchestrator | Saturday 14 February 2026 03:52:23 +0000 (0:00:02.612) 0:00:16.391 *****
2026-02-14 03:53:59.627907 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:53:59.627917 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:53:59.627928 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:53:59.627939 | orchestrator |
2026-02-14 03:53:59.627949 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-02-14 03:53:59.627960 | orchestrator | Saturday 14 February 2026 03:52:24 +0000 (0:00:00.306) 0:00:16.697 *****
2026-02-14 03:53:59.627971 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:53:59.627982 | orchestrator |
2026-02-14 03:53:59.627993 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-02-14 03:53:59.628003 | orchestrator | Saturday 14 February 2026 03:52:55 +0000 (0:00:31.283) 0:00:47.981 *****
2026-02-14 03:53:59.628014 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:53:59.628024 | orchestrator |
2026-02-14 03:53:59.628035 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-02-14 03:53:59.628045 | orchestrator | Saturday 14 February 2026 03:53:09 +0000 (0:00:14.241) 0:01:02.223 *****
2026-02-14 03:53:59.628056 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:53:59.628067 | orchestrator |
2026-02-14 03:53:59.628078 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-02-14 03:53:59.628088 | orchestrator | Saturday 14 February 2026 03:53:21 +0000 (0:00:11.954) 0:01:14.177 *****
2026-02-14 03:53:59.628120 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:53:59.628132 | orchestrator |
2026-02-14 03:53:59.628151 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-02-14 03:53:59.628162 | orchestrator | Saturday 14 February 2026 03:53:22 +0000 (0:00:00.662) 0:01:14.839 *****
2026-02-14 03:53:59.628173 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:53:59.628184 | orchestrator |
2026-02-14 03:53:59.628195 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-14 03:53:59.628206 | orchestrator | Saturday 14 February 2026 03:53:22 +0000 (0:00:00.456) 0:01:15.295 *****
2026-02-14 03:53:59.628217 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 03:53:59.628228 | orchestrator |
2026-02-14 03:53:59.628239 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-02-14 03:53:59.628298 | orchestrator | Saturday 14 February 2026 03:53:23 +0000 (0:00:00.683) 0:01:15.979 *****
2026-02-14 03:53:59.628310 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:53:59.628321 | orchestrator |
2026-02-14 03:53:59.628332 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-02-14 03:53:59.628343 | orchestrator | Saturday 14 February 2026 03:53:41 +0000 (0:00:17.798) 0:01:33.778 *****
2026-02-14 03:53:59.628354 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:53:59.628364 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:53:59.628375 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:53:59.628386 | orchestrator |
2026-02-14 03:53:59.628396 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-02-14 03:53:59.628407 | orchestrator |
2026-02-14 03:53:59.628418 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-02-14 03:53:59.628429 | orchestrator | Saturday 14 February 2026 03:53:41 +0000 (0:00:00.313) 0:01:34.092 *****
2026-02-14 03:53:59.628439 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 03:53:59.628450 | orchestrator |
2026-02-14 03:53:59.628461 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-02-14 03:53:59.628472 | orchestrator | Saturday 14 February 2026 03:53:42 +0000 (0:00:00.769) 0:01:34.861 *****
2026-02-14 03:53:59.628482 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:53:59.628493 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:53:59.628504 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:53:59.628515 | orchestrator |
2026-02-14 03:53:59.628525 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-02-14 03:53:59.628536 | orchestrator | Saturday 14 February 2026 03:53:44 +0000 (0:00:02.007) 0:01:36.869 *****
2026-02-14 03:53:59.628547 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:53:59.628558 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:53:59.628568 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:53:59.628579 | orchestrator |
2026-02-14 03:53:59.628590 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-02-14 03:53:59.628600 | orchestrator | Saturday 14 February 2026 03:53:46 +0000 (0:00:02.084) 0:01:38.954 *****
2026-02-14 03:53:59.628611 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:53:59.628622 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:53:59.628632 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:53:59.628643 | orchestrator |
2026-02-14 03:53:59.628654 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-02-14 03:53:59.628664 | orchestrator | Saturday 14 February 2026 03:53:46 +0000 (0:00:00.513) 0:01:39.467 *****
2026-02-14 03:53:59.628675 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-14 03:53:59.628686 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:53:59.628697 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-14 03:53:59.628708 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:53:59.628718 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-14 03:53:59.628729 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-02-14 03:53:59.628740 | orchestrator |
2026-02-14 03:53:59.628751 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-02-14 03:53:59.628762 | orchestrator | Saturday 14 February 2026 03:53:54 +0000 (0:00:07.571) 0:01:47.039 *****
2026-02-14 03:53:59.628773 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:53:59.628783 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:53:59.628794 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:53:59.628805 | orchestrator |
2026-02-14 03:53:59.628816 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-02-14 03:53:59.628827 | orchestrator | Saturday 14 February 2026 03:53:54 +0000 (0:00:00.333) 0:01:47.373 *****
2026-02-14 03:53:59.628837 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-14 03:53:59.628848 | orchestrator | skipping: [testbed-node-0]
2026-02-14 03:53:59.628859 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-14 03:53:59.628876 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:53:59.628887 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-14 03:53:59.628898 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:53:59.628909 | orchestrator |
2026-02-14 03:53:59.628919 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-02-14 03:53:59.628930 | orchestrator | Saturday 14 February 2026 03:53:55 +0000 (0:00:01.091) 0:01:48.464 *****
2026-02-14 03:53:59.628941 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:53:59.628952 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:53:59.628962 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:53:59.628973 | orchestrator |
2026-02-14 03:53:59.628984 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-02-14 03:53:59.628995 | orchestrator | Saturday 14 February 2026 03:53:56 +0000 (0:00:00.459) 0:01:48.924 *****
2026-02-14 03:53:59.629005 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:53:59.629016 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:53:59.629027 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:53:59.629038 | orchestrator |
2026-02-14 03:53:59.629049 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-02-14 03:53:59.629059 | orchestrator | Saturday 14 February 2026 03:53:57 +0000 (0:00:00.923) 0:01:49.848 *****
2026-02-14 03:53:59.629070 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:53:59.629081 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:53:59.629099 | orchestrator | changed: [testbed-node-0]
2026-02-14 03:55:16.907686 | orchestrator |
2026-02-14 03:55:16.907789 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-02-14 03:55:16.907802 | orchestrator | Saturday 14 February 2026 03:53:59 +0000 (0:00:02.358) 0:01:52.206 *****
2026-02-14 03:55:16.907812 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:55:16.907822 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:55:16.907831 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:55:16.907840 | orchestrator |
2026-02-14 03:55:16.907850 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-02-14 03:55:16.907858 | orchestrator | Saturday 14 February 2026 03:54:20 +0000 (0:00:20.991) 0:02:13.197 *****
2026-02-14 03:55:16.907867 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:55:16.907876 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:55:16.907885 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:55:16.907894 | orchestrator |
2026-02-14 03:55:16.907903 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-02-14 03:55:16.907912 | orchestrator | Saturday 14 February 2026 03:54:31 +0000 (0:00:11.102) 0:02:24.300 *****
2026-02-14 03:55:16.907920 | orchestrator | ok: [testbed-node-0]
2026-02-14 03:55:16.907929 | orchestrator | skipping: [testbed-node-1]
2026-02-14 03:55:16.907938 | orchestrator | skipping: [testbed-node-2]
2026-02-14 03:55:16.907947 | orchestrator | 2026-02-14 03:55:16.907955 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-02-14 03:55:16.907964 | orchestrator | Saturday 14 February 2026 03:54:32 +0000 (0:00:01.053) 0:02:25.354 ***** 2026-02-14 03:55:16.907973 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:55:16.907982 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:55:16.907991 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:55:16.908000 | orchestrator | 2026-02-14 03:55:16.908009 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-02-14 03:55:16.908018 | orchestrator | Saturday 14 February 2026 03:54:46 +0000 (0:00:13.458) 0:02:38.812 ***** 2026-02-14 03:55:16.908026 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:55:16.908035 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:55:16.908044 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:55:16.908063 | orchestrator | 2026-02-14 03:55:16.908072 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-02-14 03:55:16.908081 | orchestrator | Saturday 14 February 2026 03:54:47 +0000 (0:00:01.086) 0:02:39.899 ***** 2026-02-14 03:55:16.908111 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:55:16.908120 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:55:16.908129 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:55:16.908138 | orchestrator | 2026-02-14 03:55:16.908146 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-02-14 03:55:16.908155 | orchestrator | 2026-02-14 03:55:16.908164 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-14 03:55:16.908172 | orchestrator | Saturday 14 February 2026 03:54:47 +0000 (0:00:00.307) 0:02:40.206 ***** 2026-02-14 03:55:16.908181 | orchestrator | 
included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:55:16.908191 | orchestrator | 2026-02-14 03:55:16.908200 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-02-14 03:55:16.908208 | orchestrator | Saturday 14 February 2026 03:54:48 +0000 (0:00:00.753) 0:02:40.959 ***** 2026-02-14 03:55:16.908217 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-02-14 03:55:16.908226 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-02-14 03:55:16.908234 | orchestrator | 2026-02-14 03:55:16.908243 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-02-14 03:55:16.908252 | orchestrator | Saturday 14 February 2026 03:54:51 +0000 (0:00:03.459) 0:02:44.419 ***** 2026-02-14 03:55:16.908261 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-02-14 03:55:16.908348 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-02-14 03:55:16.908360 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-02-14 03:55:16.908369 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-02-14 03:55:16.908379 | orchestrator | 2026-02-14 03:55:16.908388 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-02-14 03:55:16.908397 | orchestrator | Saturday 14 February 2026 03:54:58 +0000 (0:00:06.543) 0:02:50.962 ***** 2026-02-14 03:55:16.908406 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-14 03:55:16.908415 | orchestrator | 2026-02-14 03:55:16.908423 | orchestrator | TASK [service-ks-register : nova | Creating users] 
***************************** 2026-02-14 03:55:16.908432 | orchestrator | Saturday 14 February 2026 03:55:01 +0000 (0:00:03.093) 0:02:54.056 ***** 2026-02-14 03:55:16.908441 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-14 03:55:16.908449 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-02-14 03:55:16.908458 | orchestrator | 2026-02-14 03:55:16.908467 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-02-14 03:55:16.908476 | orchestrator | Saturday 14 February 2026 03:55:05 +0000 (0:00:03.868) 0:02:57.924 ***** 2026-02-14 03:55:16.908484 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-14 03:55:16.908493 | orchestrator | 2026-02-14 03:55:16.908502 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-02-14 03:55:16.908510 | orchestrator | Saturday 14 February 2026 03:55:08 +0000 (0:00:03.062) 0:03:00.987 ***** 2026-02-14 03:55:16.908519 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-02-14 03:55:16.908538 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-02-14 03:55:16.908547 | orchestrator | 2026-02-14 03:55:16.908556 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-02-14 03:55:16.908585 | orchestrator | Saturday 14 February 2026 03:55:15 +0000 (0:00:07.224) 0:03:08.211 ***** 2026-02-14 03:55:16.908600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-14 03:55:16.908625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-14 03:55:16.908638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-14 03:55:16.908660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-02-14 03:55:21.426749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-14 03:55:21.426889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-14 03:55:21.426917 | orchestrator | 2026-02-14 03:55:21.426938 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-02-14 03:55:21.426959 | orchestrator | Saturday 14 February 2026 03:55:16 +0000 (0:00:01.280) 0:03:09.492 ***** 2026-02-14 03:55:21.426976 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:55:21.426996 | orchestrator | 2026-02-14 03:55:21.427015 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-02-14 03:55:21.427033 | orchestrator | Saturday 14 February 2026 03:55:17 +0000 (0:00:00.142) 0:03:09.635 ***** 2026-02-14 03:55:21.427050 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:55:21.427068 | 
orchestrator | skipping: [testbed-node-1] 2026-02-14 03:55:21.427085 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:55:21.427103 | orchestrator | 2026-02-14 03:55:21.427121 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-02-14 03:55:21.427138 | orchestrator | Saturday 14 February 2026 03:55:17 +0000 (0:00:00.318) 0:03:09.953 ***** 2026-02-14 03:55:21.427156 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-14 03:55:21.427174 | orchestrator | 2026-02-14 03:55:21.427191 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-02-14 03:55:21.427210 | orchestrator | Saturday 14 February 2026 03:55:18 +0000 (0:00:00.682) 0:03:10.635 ***** 2026-02-14 03:55:21.427229 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:55:21.427248 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:55:21.427267 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:55:21.427285 | orchestrator | 2026-02-14 03:55:21.427337 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-14 03:55:21.427357 | orchestrator | Saturday 14 February 2026 03:55:18 +0000 (0:00:00.515) 0:03:11.150 ***** 2026-02-14 03:55:21.427376 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:55:21.427395 | orchestrator | 2026-02-14 03:55:21.427413 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-14 03:55:21.427432 | orchestrator | Saturday 14 February 2026 03:55:19 +0000 (0:00:00.576) 0:03:11.727 ***** 2026-02-14 03:55:21.427456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-14 03:55:21.427561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-14 03:55:21.427588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-14 03:55:21.427607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-14 03:55:21.427627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-14 03:55:21.427667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-14 03:55:21.427687 | orchestrator | 2026-02-14 03:55:21.427717 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-14 03:55:23.061746 | orchestrator | Saturday 14 February 2026 03:55:21 +0000 (0:00:02.289) 0:03:14.017 ***** 2026-02-14 03:55:23.061859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-14 03:55:23.061881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 03:55:23.061894 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:55:23.061908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-14 03:55:23.061942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 03:55:23.061969 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:55:23.062000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-14 03:55:23.062072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 03:55:23.062087 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:55:23.062098 | orchestrator | 2026-02-14 03:55:23.062111 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-14 03:55:23.062122 | orchestrator | Saturday 14 February 2026 03:55:22 +0000 (0:00:00.821) 
0:03:14.838 ***** 2026-02-14 03:55:23.062134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-14 03:55:23.062156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 03:55:23.062168 | orchestrator | skipping: 
[testbed-node-0] 2026-02-14 03:55:23.062195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-14 03:55:25.384071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 03:55:25.384173 | orchestrator | skipping: 
[testbed-node-1] 2026-02-14 03:55:25.384193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-14 03:55:25.384239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 03:55:25.384251 | orchestrator | skipping: 
[testbed-node-2] 2026-02-14 03:55:25.384264 | orchestrator | 2026-02-14 03:55:25.384276 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-02-14 03:55:25.384289 | orchestrator | Saturday 14 February 2026 03:55:23 +0000 (0:00:00.813) 0:03:15.652 ***** 2026-02-14 03:55:25.384386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-14 03:55:25.384420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-14 03:55:25.384434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-14 03:55:25.384456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-14 03:55:25.384474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-14 03:55:25.384494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-14 03:55:31.736617 | orchestrator | 2026-02-14 03:55:31.736715 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-02-14 03:55:31.736731 | orchestrator | Saturday 14 February 2026 03:55:25 +0000 (0:00:02.320) 0:03:17.972 ***** 2026-02-14 03:55:31.736746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-14 03:55:31.736780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-14 03:55:31.736807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-14 03:55:31.736836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-14 03:55:31.736848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-14 03:55:31.736863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-14 03:55:31.736872 | orchestrator | 2026-02-14 03:55:31.736881 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-02-14 03:55:31.736890 | orchestrator | Saturday 14 February 2026 03:55:31 +0000 (0:00:05.729) 0:03:23.702 ***** 2026-02-14 03:55:31.736905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-14 03:55:31.736915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 03:55:31.736925 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:55:31.736944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-14 03:55:36.075554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 03:55:36.075670 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:55:36.075690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-14 03:55:36.075723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 03:55:36.075736 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:55:36.075747 | orchestrator | 2026-02-14 03:55:36.075759 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-02-14 03:55:36.075771 | orchestrator | Saturday 14 February 2026 03:55:31 +0000 (0:00:00.625) 0:03:24.327 ***** 2026-02-14 03:55:36.075782 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:55:36.075793 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:55:36.075804 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:55:36.075814 | orchestrator | 2026-02-14 03:55:36.075825 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-02-14 03:55:36.075836 | orchestrator | Saturday 14 February 2026 03:55:33 +0000 (0:00:01.527) 0:03:25.854 ***** 2026-02-14 03:55:36.075848 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:55:36.075867 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:55:36.075885 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:55:36.075901 | orchestrator | 2026-02-14 03:55:36.075920 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-02-14 03:55:36.075937 | orchestrator | Saturday 14 February 2026 03:55:33 +0000 (0:00:00.318) 0:03:26.173 ***** 2026-02-14 03:55:36.075980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-14 03:55:36.076029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-14 03:55:36.076060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-14 03:55:36.076084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-14 03:55:36.076118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-14 03:55:36.076152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-14 03:56:17.122455 | orchestrator | 2026-02-14 03:56:17.122568 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-14 03:56:17.122581 | orchestrator | Saturday 14 February 2026 03:55:35 +0000 (0:00:02.066) 0:03:28.239 ***** 2026-02-14 03:56:17.122589 | orchestrator | 2026-02-14 03:56:17.122597 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-14 03:56:17.122605 | orchestrator | Saturday 14 February 2026 03:55:35 
+0000 (0:00:00.140) 0:03:28.380 ***** 2026-02-14 03:56:17.122613 | orchestrator | 2026-02-14 03:56:17.122620 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-14 03:56:17.122628 | orchestrator | Saturday 14 February 2026 03:55:35 +0000 (0:00:00.138) 0:03:28.518 ***** 2026-02-14 03:56:17.122635 | orchestrator | 2026-02-14 03:56:17.122643 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-02-14 03:56:17.122650 | orchestrator | Saturday 14 February 2026 03:55:36 +0000 (0:00:00.141) 0:03:28.660 ***** 2026-02-14 03:56:17.122658 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:56:17.122667 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:56:17.122674 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:56:17.122681 | orchestrator | 2026-02-14 03:56:17.122689 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-02-14 03:56:17.122696 | orchestrator | Saturday 14 February 2026 03:55:55 +0000 (0:00:19.580) 0:03:48.240 ***** 2026-02-14 03:56:17.122703 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:56:17.122711 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:56:17.122718 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:56:17.122725 | orchestrator | 2026-02-14 03:56:17.122733 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-02-14 03:56:17.122740 | orchestrator | 2026-02-14 03:56:17.122747 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-14 03:56:17.122755 | orchestrator | Saturday 14 February 2026 03:56:05 +0000 (0:00:09.944) 0:03:58.185 ***** 2026-02-14 03:56:17.122763 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:56:17.122772 | 
orchestrator | 2026-02-14 03:56:17.122779 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-14 03:56:17.122799 | orchestrator | Saturday 14 February 2026 03:56:06 +0000 (0:00:01.219) 0:03:59.404 ***** 2026-02-14 03:56:17.122807 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:56:17.122814 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:56:17.122822 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:56:17.122849 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:56:17.122857 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:56:17.122864 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:56:17.122872 | orchestrator | 2026-02-14 03:56:17.122879 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-02-14 03:56:17.122887 | orchestrator | Saturday 14 February 2026 03:56:07 +0000 (0:00:00.834) 0:04:00.239 ***** 2026-02-14 03:56:17.122894 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:56:17.122901 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:56:17.122908 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:56:17.122916 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-14 03:56:17.122924 | orchestrator | 2026-02-14 03:56:17.122931 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-14 03:56:17.122938 | orchestrator | Saturday 14 February 2026 03:56:08 +0000 (0:00:00.864) 0:04:01.104 ***** 2026-02-14 03:56:17.122947 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-02-14 03:56:17.122954 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-02-14 03:56:17.122961 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-02-14 03:56:17.122969 | orchestrator | 2026-02-14 03:56:17.122976 | orchestrator | TASK [module-load : Persist modules via modules-load.d] 
************************ 2026-02-14 03:56:17.122984 | orchestrator | Saturday 14 February 2026 03:56:09 +0000 (0:00:00.874) 0:04:01.979 ***** 2026-02-14 03:56:17.122991 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-02-14 03:56:17.123000 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-02-14 03:56:17.123008 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-02-14 03:56:17.123016 | orchestrator | 2026-02-14 03:56:17.123024 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-14 03:56:17.123033 | orchestrator | Saturday 14 February 2026 03:56:10 +0000 (0:00:01.187) 0:04:03.166 ***** 2026-02-14 03:56:17.123041 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-02-14 03:56:17.123049 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:56:17.123057 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-02-14 03:56:17.123065 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:56:17.123074 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-02-14 03:56:17.123082 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:56:17.123090 | orchestrator | 2026-02-14 03:56:17.123099 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-02-14 03:56:17.123107 | orchestrator | Saturday 14 February 2026 03:56:11 +0000 (0:00:00.553) 0:04:03.720 ***** 2026-02-14 03:56:17.123115 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-14 03:56:17.123124 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-14 03:56:17.123132 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-14 03:56:17.123141 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-14 03:56:17.123149 | orchestrator | 
skipping: [testbed-node-0] 2026-02-14 03:56:17.123157 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-14 03:56:17.123166 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-14 03:56:17.123174 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:56:17.123196 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-14 03:56:17.123205 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-14 03:56:17.123214 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:56:17.123222 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-14 03:56:17.123230 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-14 03:56:17.123245 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-14 03:56:17.123253 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-14 03:56:17.123261 | orchestrator | 2026-02-14 03:56:17.123269 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-02-14 03:56:17.123277 | orchestrator | Saturday 14 February 2026 03:56:12 +0000 (0:00:01.345) 0:04:05.066 ***** 2026-02-14 03:56:17.123285 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:56:17.123294 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:56:17.123302 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:56:17.123310 | orchestrator | changed: [testbed-node-3] 2026-02-14 03:56:17.123318 | orchestrator | changed: [testbed-node-4] 2026-02-14 03:56:17.123327 | orchestrator | changed: [testbed-node-5] 2026-02-14 03:56:17.123335 | orchestrator | 2026-02-14 03:56:17.123357 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-02-14 
03:56:17.123365 | orchestrator | Saturday 14 February 2026 03:56:13 +0000 (0:00:01.144) 0:04:06.210 ***** 2026-02-14 03:56:17.123372 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:56:17.123379 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:56:17.123386 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:56:17.123394 | orchestrator | changed: [testbed-node-3] 2026-02-14 03:56:17.123401 | orchestrator | changed: [testbed-node-5] 2026-02-14 03:56:17.123408 | orchestrator | changed: [testbed-node-4] 2026-02-14 03:56:17.123415 | orchestrator | 2026-02-14 03:56:17.123422 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-02-14 03:56:17.123430 | orchestrator | Saturday 14 February 2026 03:56:15 +0000 (0:00:01.731) 0:04:07.942 ***** 2026-02-14 03:56:17.123443 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-14 03:56:17.123456 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 
'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-14 03:56:17.123469 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-14 03:56:18.906483 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-14 03:56:18.906589 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-14 03:56:18.906625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-14 03:56:18.906638 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-14 03:56:18.906650 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-14 03:56:18.906663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-14 03:56:18.906715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-14 03:56:18.906728 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-14 03:56:18.906746 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-14 03:56:18.906759 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-14 03:56:18.906770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-14 03:56:18.906783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-14 03:56:18.906803 | orchestrator | 2026-02-14 03:56:18.906816 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-14 
03:56:18.906830 | orchestrator | Saturday 14 February 2026 03:56:17 +0000 (0:00:02.257) 0:04:10.199 ***** 2026-02-14 03:56:18.906842 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 03:56:18.906854 | orchestrator | 2026-02-14 03:56:18.906865 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-14 03:56:18.906883 | orchestrator | Saturday 14 February 2026 03:56:18 +0000 (0:00:01.298) 0:04:11.497 ***** 2026-02-14 03:56:22.162812 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-14 03:56:22.162936 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-14 03:56:22.162952 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-14 03:56:22.162966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-14 03:56:22.163000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-14 03:56:22.163031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-14 03:56:22.163043 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-14 
03:56:22.163061 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-14 03:56:22.163073 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-14 03:56:22.163084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-14 03:56:22.163104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 
'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-14 03:56:22.163115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-14 03:56:22.163136 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-14 03:56:24.039755 | orchestrator | 
changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-14 03:56:24.039866 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-14 03:56:24.039879 | orchestrator | 2026-02-14 03:56:24.039891 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-14 03:56:24.039902 | orchestrator | Saturday 14 February 2026 03:56:22 +0000 (0:00:03.806) 0:04:15.304 ***** 2026-02-14 03:56:24.039913 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-14 03:56:24.039942 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-14 03:56:24.039953 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-14 03:56:24.039963 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:56:24.039994 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-14 03:56:24.040005 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-14 03:56:24.040015 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-14 03:56:24.040030 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:56:24.040039 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-14 03:56:24.040049 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-14 03:56:24.040065 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-14 03:56:25.789103 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:56:25.789241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-14 03:56:25.789261 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-14 03:56:25.789294 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:56:25.789306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-14 03:56:25.789318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-14 03:56:25.789329 | orchestrator | skipping: [testbed-node-1] 2026-02-14 
03:56:25.789340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-14 03:56:25.789396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-14 03:56:25.789409 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:56:25.789420 | orchestrator | 2026-02-14 03:56:25.789432 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-14 03:56:25.789445 | orchestrator | Saturday 14 February 2026 03:56:24 +0000 (0:00:01.556) 0:04:16.860 ***** 2026-02-14 03:56:25.789482 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-14 03:56:25.789504 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-14 03:56:25.789518 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}})  2026-02-14 03:56:25.789529 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:56:25.789552 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-14 03:56:25.789564 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-14 03:56:25.789589 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 
'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-14 03:56:33.120209 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:56:33.120326 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-14 03:56:33.120432 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-14 03:56:33.120449 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-14 03:56:33.120462 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:56:33.120475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-14 03:56:33.120487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-14 03:56:33.120499 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:56:33.120543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-14 03:56:33.120565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-14 03:56:33.120577 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:56:33.120588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-14 03:56:33.120600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-14 03:56:33.120612 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:56:33.120623 | orchestrator | 2026-02-14 03:56:33.120635 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-14 03:56:33.120648 | orchestrator | Saturday 14 February 2026 03:56:26 +0000 (0:00:02.082) 0:04:18.943 ***** 2026-02-14 03:56:33.120659 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:56:33.120670 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:56:33.120681 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:56:33.120693 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-14 03:56:33.120704 | orchestrator | 2026-02-14 03:56:33.120715 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-02-14 
03:56:33.120726 | orchestrator | Saturday 14 February 2026 03:56:27 +0000 (0:00:01.115) 0:04:20.058 ***** 2026-02-14 03:56:33.120738 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-14 03:56:33.120751 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-14 03:56:33.120763 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-14 03:56:33.120776 | orchestrator | 2026-02-14 03:56:33.120789 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-02-14 03:56:33.120801 | orchestrator | Saturday 14 February 2026 03:56:28 +0000 (0:00:01.062) 0:04:21.121 ***** 2026-02-14 03:56:33.120814 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-14 03:56:33.120826 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-14 03:56:33.120838 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-14 03:56:33.120850 | orchestrator | 2026-02-14 03:56:33.120863 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-02-14 03:56:33.120875 | orchestrator | Saturday 14 February 2026 03:56:29 +0000 (0:00:00.923) 0:04:22.045 ***** 2026-02-14 03:56:33.120894 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:56:33.120907 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:56:33.120920 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:56:33.120932 | orchestrator | 2026-02-14 03:56:33.120944 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-02-14 03:56:33.120956 | orchestrator | Saturday 14 February 2026 03:56:30 +0000 (0:00:00.566) 0:04:22.611 ***** 2026-02-14 03:56:33.120968 | orchestrator | ok: [testbed-node-3] 2026-02-14 03:56:33.120980 | orchestrator | ok: [testbed-node-4] 2026-02-14 03:56:33.120993 | orchestrator | ok: [testbed-node-5] 2026-02-14 03:56:33.121005 | orchestrator | 2026-02-14 03:56:33.121017 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 
2026-02-14 03:56:33.121030 | orchestrator | Saturday 14 February 2026 03:56:30 +0000 (0:00:00.493) 0:04:23.105 ***** 2026-02-14 03:56:33.121042 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-14 03:56:33.121055 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-14 03:56:33.121067 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-14 03:56:33.121079 | orchestrator | 2026-02-14 03:56:33.121092 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-02-14 03:56:33.121105 | orchestrator | Saturday 14 February 2026 03:56:31 +0000 (0:00:01.421) 0:04:24.526 ***** 2026-02-14 03:56:33.121130 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-14 03:56:51.252482 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-14 03:56:51.252683 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-14 03:56:51.252705 | orchestrator | 2026-02-14 03:56:51.252720 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-02-14 03:56:51.252733 | orchestrator | Saturday 14 February 2026 03:56:33 +0000 (0:00:01.184) 0:04:25.711 ***** 2026-02-14 03:56:51.252744 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-14 03:56:51.252755 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-14 03:56:51.252766 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-14 03:56:51.252777 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-02-14 03:56:51.252787 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-02-14 03:56:51.252798 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-02-14 03:56:51.252809 | orchestrator | 2026-02-14 03:56:51.252820 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-02-14 
03:56:51.252831 | orchestrator | Saturday 14 February 2026 03:56:36 +0000 (0:00:03.714) 0:04:29.426 ***** 2026-02-14 03:56:51.252842 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:56:51.252855 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:56:51.252866 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:56:51.252876 | orchestrator | 2026-02-14 03:56:51.252887 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-02-14 03:56:51.252898 | orchestrator | Saturday 14 February 2026 03:56:37 +0000 (0:00:00.322) 0:04:29.749 ***** 2026-02-14 03:56:51.252909 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:56:51.252920 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:56:51.252931 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:56:51.252944 | orchestrator | 2026-02-14 03:56:51.252957 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-02-14 03:56:51.252970 | orchestrator | Saturday 14 February 2026 03:56:37 +0000 (0:00:00.526) 0:04:30.275 ***** 2026-02-14 03:56:51.252982 | orchestrator | changed: [testbed-node-3] 2026-02-14 03:56:51.252994 | orchestrator | changed: [testbed-node-4] 2026-02-14 03:56:51.253006 | orchestrator | changed: [testbed-node-5] 2026-02-14 03:56:51.253018 | orchestrator | 2026-02-14 03:56:51.253030 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-02-14 03:56:51.253042 | orchestrator | Saturday 14 February 2026 03:56:38 +0000 (0:00:01.243) 0:04:31.519 ***** 2026-02-14 03:56:51.253056 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-02-14 03:56:51.253097 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-02-14 03:56:51.253109 | orchestrator | 
changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-02-14 03:56:51.253119 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-02-14 03:56:51.253131 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-02-14 03:56:51.253142 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-02-14 03:56:51.253153 | orchestrator | 2026-02-14 03:56:51.253164 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-02-14 03:56:51.253174 | orchestrator | Saturday 14 February 2026 03:56:42 +0000 (0:00:03.285) 0:04:34.804 ***** 2026-02-14 03:56:51.253185 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-14 03:56:51.253196 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-14 03:56:51.253207 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-14 03:56:51.253218 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-14 03:56:51.253229 | orchestrator | changed: [testbed-node-3] 2026-02-14 03:56:51.253240 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-14 03:56:51.253250 | orchestrator | changed: [testbed-node-4] 2026-02-14 03:56:51.253261 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-14 03:56:51.253272 | orchestrator | changed: [testbed-node-5] 2026-02-14 03:56:51.253283 | orchestrator | 2026-02-14 03:56:51.253293 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-02-14 03:56:51.253304 | orchestrator | Saturday 14 February 2026 03:56:45 +0000 (0:00:03.298) 0:04:38.103 ***** 2026-02-14 03:56:51.253315 | 
orchestrator | skipping: [testbed-node-3] 2026-02-14 03:56:51.253326 | orchestrator | 2026-02-14 03:56:51.253337 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-02-14 03:56:51.253349 | orchestrator | Saturday 14 February 2026 03:56:45 +0000 (0:00:00.135) 0:04:38.238 ***** 2026-02-14 03:56:51.253359 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:56:51.253392 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:56:51.253404 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:56:51.253414 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:56:51.253425 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:56:51.253436 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:56:51.253446 | orchestrator | 2026-02-14 03:56:51.253457 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-02-14 03:56:51.253468 | orchestrator | Saturday 14 February 2026 03:56:46 +0000 (0:00:00.860) 0:04:39.099 ***** 2026-02-14 03:56:51.253479 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-14 03:56:51.253489 | orchestrator | 2026-02-14 03:56:51.253500 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-02-14 03:56:51.253511 | orchestrator | Saturday 14 February 2026 03:56:47 +0000 (0:00:00.728) 0:04:39.828 ***** 2026-02-14 03:56:51.253538 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:56:51.253568 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:56:51.253580 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:56:51.253590 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:56:51.253601 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:56:51.253611 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:56:51.253622 | orchestrator | 2026-02-14 03:56:51.253633 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 
2026-02-14 03:56:51.253643 | orchestrator | Saturday 14 February 2026 03:56:48 +0000 (0:00:00.780) 0:04:40.608 ***** 2026-02-14 03:56:51.253667 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-14 03:56:51.253683 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-14 03:56:51.253694 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-14 03:56:51.253707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-14 03:56:51.253732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-14 03:56:57.776758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-14 03:56:57.776870 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-14 03:56:57.776886 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-14 03:56:57.776898 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-14 03:56:57.776910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-14 03:56:57.776921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-14 03:56:57.776966 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-14 03:56:57.777001 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-14 03:56:57.777015 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-14 03:56:57.777026 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-14 03:56:57.777039 | orchestrator | 2026-02-14 03:56:57.777053 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-02-14 03:56:57.777066 | orchestrator | Saturday 14 February 2026 03:56:51 +0000 (0:00:03.690) 0:04:44.299 ***** 2026-02-14 03:56:57.777078 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-14 03:56:57.777096 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-14 03:56:57.777127 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-14 03:56:58.197475 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-14 03:56:58.197581 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-14 03:56:58.197600 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-14 03:56:58.197614 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-14 03:56:58.197664 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-14 03:56:58.197695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-14 03:56:58.197709 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-14 03:56:58.197721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-14 03:56:58.197733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 
'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-14 03:56:58.197745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-14 03:56:58.197770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-14 03:56:58.197782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-14 03:56:58.197794 | orchestrator | 2026-02-14 03:56:58.197807 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-02-14 03:56:58.197826 | orchestrator | Saturday 14 February 2026 03:56:58 +0000 (0:00:06.488) 0:04:50.787 ***** 2026-02-14 03:57:19.201671 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:57:19.201750 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:57:19.201756 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:57:19.201761 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:57:19.201765 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:57:19.201769 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:57:19.201774 | orchestrator | 2026-02-14 03:57:19.201779 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-02-14 03:57:19.201785 | orchestrator | Saturday 14 February 2026 03:56:59 +0000 (0:00:01.315) 0:04:52.102 ***** 2026-02-14 03:57:19.201789 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-14 03:57:19.201794 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-14 03:57:19.201798 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-14 03:57:19.201802 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-14 03:57:19.201805 | orchestrator | changed: [testbed-node-4] => 
(item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-14 03:57:19.201809 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-14 03:57:19.201814 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:57:19.201818 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-14 03:57:19.201821 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-14 03:57:19.201825 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:57:19.201829 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-14 03:57:19.201833 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:57:19.201836 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-14 03:57:19.201840 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-14 03:57:19.201860 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-14 03:57:19.201864 | orchestrator | 2026-02-14 03:57:19.201868 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-02-14 03:57:19.201872 | orchestrator | Saturday 14 February 2026 03:57:03 +0000 (0:00:03.534) 0:04:55.637 ***** 2026-02-14 03:57:19.201875 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:57:19.201879 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:57:19.201883 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:57:19.201887 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:57:19.201890 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:57:19.201894 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:57:19.201898 | orchestrator | 2026-02-14 03:57:19.201901 | orchestrator | TASK 
[nova-cell : Copying over libvirt SASL configuration] ********************* 2026-02-14 03:57:19.201905 | orchestrator | Saturday 14 February 2026 03:57:03 +0000 (0:00:00.616) 0:04:56.254 ***** 2026-02-14 03:57:19.201909 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-14 03:57:19.201913 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-14 03:57:19.201917 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-14 03:57:19.201920 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-14 03:57:19.201924 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-14 03:57:19.201928 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-14 03:57:19.201941 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-14 03:57:19.201945 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-14 03:57:19.201949 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-14 03:57:19.201953 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-14 03:57:19.201956 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:57:19.201960 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-14 03:57:19.201964 | orchestrator | 
skipping: [testbed-node-0] 2026-02-14 03:57:19.201967 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-14 03:57:19.201971 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:57:19.201975 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-14 03:57:19.201978 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-14 03:57:19.201993 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-14 03:57:19.201999 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-14 03:57:19.202005 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-14 03:57:19.202011 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-14 03:57:19.202066 | orchestrator | 2026-02-14 03:57:19.202072 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-02-14 03:57:19.202076 | orchestrator | Saturday 14 February 2026 03:57:08 +0000 (0:00:05.210) 0:05:01.464 ***** 2026-02-14 03:57:19.202085 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-14 03:57:19.202089 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-14 03:57:19.202092 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-14 03:57:19.202096 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-14 03:57:19.202100 
| orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-14 03:57:19.202104 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-14 03:57:19.202107 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-14 03:57:19.202111 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-14 03:57:19.202115 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-14 03:57:19.202119 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-14 03:57:19.202122 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-14 03:57:19.202126 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-14 03:57:19.202130 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-14 03:57:19.202133 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:57:19.202137 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-14 03:57:19.202141 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:57:19.202145 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-14 03:57:19.202149 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:57:19.202153 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-14 03:57:19.202156 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-14 03:57:19.202160 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-14 03:57:19.202164 | orchestrator | changed: [testbed-node-4] => (item={'src': 
'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-14 03:57:19.202167 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-14 03:57:19.202171 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-14 03:57:19.202175 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-14 03:57:19.202178 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-14 03:57:19.202182 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-14 03:57:19.202186 | orchestrator | 2026-02-14 03:57:19.202189 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-02-14 03:57:19.202197 | orchestrator | Saturday 14 February 2026 03:57:15 +0000 (0:00:06.837) 0:05:08.302 ***** 2026-02-14 03:57:19.202201 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:57:19.202205 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:57:19.202209 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:57:19.202212 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:57:19.202216 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:57:19.202220 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:57:19.202223 | orchestrator | 2026-02-14 03:57:19.202227 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-02-14 03:57:19.202231 | orchestrator | Saturday 14 February 2026 03:57:16 +0000 (0:00:00.789) 0:05:09.091 ***** 2026-02-14 03:57:19.202235 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:57:19.202242 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:57:19.202247 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:57:19.202251 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:57:19.202255 | orchestrator | 
skipping: [testbed-node-1] 2026-02-14 03:57:19.202259 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:57:19.202264 | orchestrator | 2026-02-14 03:57:19.202268 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-02-14 03:57:19.202272 | orchestrator | Saturday 14 February 2026 03:57:17 +0000 (0:00:00.627) 0:05:09.719 ***** 2026-02-14 03:57:19.202276 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:57:19.202281 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:57:19.202285 | orchestrator | changed: [testbed-node-3] 2026-02-14 03:57:19.202290 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:57:19.202294 | orchestrator | changed: [testbed-node-5] 2026-02-14 03:57:19.202298 | orchestrator | changed: [testbed-node-4] 2026-02-14 03:57:19.202302 | orchestrator | 2026-02-14 03:57:19.202311 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-02-14 03:57:20.425370 | orchestrator | Saturday 14 February 2026 03:57:19 +0000 (0:00:02.061) 0:05:11.780 ***** 2026-02-14 03:57:20.425585 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 
'timeout': '30'}}})  2026-02-14 03:57:20.425611 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-14 03:57:20.425625 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-14 03:57:20.425638 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:57:20.425670 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-14 03:57:20.425705 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-14 03:57:20.425739 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-14 03:57:20.425785 | orchestrator | skipping: 
[testbed-node-5] 2026-02-14 03:57:20.425797 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-14 03:57:20.425809 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-14 03:57:20.425820 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-14 03:57:20.425845 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:57:20.425860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-14 03:57:20.425882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-14 03:57:23.841963 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:57:23.842132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-14 03:57:23.842154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-14 03:57:23.842167 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:57:23.842179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-14 03:57:23.842190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-14 03:57:23.842226 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:57:23.842238 | orchestrator | 2026-02-14 03:57:23.842250 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-02-14 03:57:23.842263 | orchestrator | Saturday 14 February 2026 03:57:20 +0000 (0:00:01.401) 0:05:13.182 ***** 2026-02-14 03:57:23.842275 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-02-14 03:57:23.842286 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-02-14 03:57:23.842312 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:57:23.842323 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-02-14 03:57:23.842335 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-02-14 03:57:23.842346 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:57:23.842357 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-02-14 03:57:23.842368 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-02-14 03:57:23.842379 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:57:23.842461 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-02-14 03:57:23.842475 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-02-14 03:57:23.842489 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:57:23.842501 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute) 
 2026-02-14 03:57:23.842513 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-02-14 03:57:23.842526 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:57:23.842538 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-02-14 03:57:23.842551 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-02-14 03:57:23.842564 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:57:23.842575 | orchestrator | 2026-02-14 03:57:23.842588 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-02-14 03:57:23.842601 | orchestrator | Saturday 14 February 2026 03:57:21 +0000 (0:00:00.926) 0:05:14.108 ***** 2026-02-14 03:57:23.842636 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-14 03:57:23.842652 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-14 03:57:23.842674 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-14 03:57:23.842694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-14 03:57:23.842708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-14 03:57:23.842730 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-14 03:58:17.929270 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-14 03:58:17.929386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-14 03:58:17.929511 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-14 03:58:17.929528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-14 03:58:17.929556 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-14 03:58:17.929569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-14 03:58:17.929601 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-14 03:58:17.929614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-14 03:58:17.929634 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-14 03:58:17.929646 | orchestrator | 2026-02-14 03:58:17.929659 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-14 03:58:17.929672 | orchestrator | Saturday 14 February 2026 03:57:24 +0000 (0:00:02.583) 
0:05:16.692 ***** 2026-02-14 03:58:17.929684 | orchestrator | skipping: [testbed-node-3] 2026-02-14 03:58:17.929696 | orchestrator | skipping: [testbed-node-4] 2026-02-14 03:58:17.929707 | orchestrator | skipping: [testbed-node-5] 2026-02-14 03:58:17.929717 | orchestrator | skipping: [testbed-node-0] 2026-02-14 03:58:17.929728 | orchestrator | skipping: [testbed-node-1] 2026-02-14 03:58:17.929739 | orchestrator | skipping: [testbed-node-2] 2026-02-14 03:58:17.929750 | orchestrator | 2026-02-14 03:58:17.929761 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-14 03:58:17.929771 | orchestrator | Saturday 14 February 2026 03:57:24 +0000 (0:00:00.791) 0:05:17.483 ***** 2026-02-14 03:58:17.929782 | orchestrator | 2026-02-14 03:58:17.929793 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-14 03:58:17.929805 | orchestrator | Saturday 14 February 2026 03:57:25 +0000 (0:00:00.139) 0:05:17.623 ***** 2026-02-14 03:58:17.929818 | orchestrator | 2026-02-14 03:58:17.929830 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-14 03:58:17.929848 | orchestrator | Saturday 14 February 2026 03:57:25 +0000 (0:00:00.139) 0:05:17.763 ***** 2026-02-14 03:58:17.929860 | orchestrator | 2026-02-14 03:58:17.929874 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-14 03:58:17.929886 | orchestrator | Saturday 14 February 2026 03:57:25 +0000 (0:00:00.140) 0:05:17.903 ***** 2026-02-14 03:58:17.929898 | orchestrator | 2026-02-14 03:58:17.929910 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-14 03:58:17.929922 | orchestrator | Saturday 14 February 2026 03:57:25 +0000 (0:00:00.162) 0:05:18.066 ***** 2026-02-14 03:58:17.929934 | orchestrator | 2026-02-14 03:58:17.929946 | orchestrator | TASK [nova-cell : Flush handlers] 
********************************************** 2026-02-14 03:58:17.929957 | orchestrator | Saturday 14 February 2026 03:57:25 +0000 (0:00:00.295) 0:05:18.361 ***** 2026-02-14 03:58:17.929969 | orchestrator | 2026-02-14 03:58:17.929982 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-02-14 03:58:17.929993 | orchestrator | Saturday 14 February 2026 03:57:25 +0000 (0:00:00.138) 0:05:18.500 ***** 2026-02-14 03:58:17.930004 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:58:17.930076 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:58:17.930092 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:58:17.930103 | orchestrator | 2026-02-14 03:58:17.930114 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-02-14 03:58:17.930125 | orchestrator | Saturday 14 February 2026 03:57:32 +0000 (0:00:06.910) 0:05:25.410 ***** 2026-02-14 03:58:17.930135 | orchestrator | changed: [testbed-node-0] 2026-02-14 03:58:17.930146 | orchestrator | changed: [testbed-node-2] 2026-02-14 03:58:17.930157 | orchestrator | changed: [testbed-node-1] 2026-02-14 03:58:17.930168 | orchestrator | 2026-02-14 03:58:17.930178 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-02-14 03:58:17.930197 | orchestrator | Saturday 14 February 2026 03:57:52 +0000 (0:00:19.235) 0:05:44.646 ***** 2026-02-14 03:58:17.930208 | orchestrator | changed: [testbed-node-3] 2026-02-14 03:58:17.930218 | orchestrator | changed: [testbed-node-5] 2026-02-14 03:58:17.930229 | orchestrator | changed: [testbed-node-4] 2026-02-14 03:58:17.930240 | orchestrator | 2026-02-14 03:58:17.930259 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-02-14 04:00:41.359526 | orchestrator | Saturday 14 February 2026 03:58:17 +0000 (0:00:25.864) 0:06:10.510 ***** 2026-02-14 04:00:41.359645 | orchestrator | changed: 
[testbed-node-3] 2026-02-14 04:00:41.359712 | orchestrator | changed: [testbed-node-5] 2026-02-14 04:00:41.359725 | orchestrator | changed: [testbed-node-4] 2026-02-14 04:00:41.359737 | orchestrator | 2026-02-14 04:00:41.359749 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-02-14 04:00:41.359761 | orchestrator | Saturday 14 February 2026 03:58:58 +0000 (0:00:40.392) 0:06:50.902 ***** 2026-02-14 04:00:41.359772 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left). 2026-02-14 04:00:41.359785 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2026-02-14 04:00:41.359796 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 2026-02-14 04:00:41.359806 | orchestrator | changed: [testbed-node-3] 2026-02-14 04:00:41.359817 | orchestrator | changed: [testbed-node-5] 2026-02-14 04:00:41.359828 | orchestrator | changed: [testbed-node-4] 2026-02-14 04:00:41.359839 | orchestrator | 2026-02-14 04:00:41.359850 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-02-14 04:00:41.359861 | orchestrator | Saturday 14 February 2026 03:59:04 +0000 (0:00:06.221) 0:06:57.124 ***** 2026-02-14 04:00:41.359872 | orchestrator | changed: [testbed-node-3] 2026-02-14 04:00:41.359883 | orchestrator | changed: [testbed-node-4] 2026-02-14 04:00:41.359894 | orchestrator | changed: [testbed-node-5] 2026-02-14 04:00:41.359905 | orchestrator | 2026-02-14 04:00:41.359915 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-02-14 04:00:41.359927 | orchestrator | Saturday 14 February 2026 03:59:05 +0000 (0:00:00.748) 0:06:57.873 ***** 2026-02-14 04:00:41.359938 | orchestrator | changed: [testbed-node-5] 2026-02-14 04:00:41.359949 | orchestrator | changed: [testbed-node-3] 2026-02-14 
04:00:41.359960 | orchestrator | changed: [testbed-node-4] 2026-02-14 04:00:41.359971 | orchestrator | 2026-02-14 04:00:41.359982 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-02-14 04:00:41.359994 | orchestrator | Saturday 14 February 2026 03:59:35 +0000 (0:00:29.913) 0:07:27.787 ***** 2026-02-14 04:00:41.360005 | orchestrator | skipping: [testbed-node-3] 2026-02-14 04:00:41.360016 | orchestrator | 2026-02-14 04:00:41.360027 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-02-14 04:00:41.360038 | orchestrator | Saturday 14 February 2026 03:59:35 +0000 (0:00:00.146) 0:07:27.933 ***** 2026-02-14 04:00:41.360049 | orchestrator | skipping: [testbed-node-4] 2026-02-14 04:00:41.360063 | orchestrator | skipping: [testbed-node-3] 2026-02-14 04:00:41.360075 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:00:41.360087 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:00:41.360100 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:00:41.360114 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2026-02-14 04:00:41.360128 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-14 04:00:41.360141 | orchestrator | 2026-02-14 04:00:41.360154 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-02-14 04:00:41.360167 | orchestrator | Saturday 14 February 2026 03:59:56 +0000 (0:00:21.291) 0:07:49.224 ***** 2026-02-14 04:00:41.360179 | orchestrator | skipping: [testbed-node-3] 2026-02-14 04:00:41.360192 | orchestrator | skipping: [testbed-node-4] 2026-02-14 04:00:41.360204 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:00:41.360237 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:00:41.360250 | orchestrator | skipping: [testbed-node-5] 2026-02-14 04:00:41.360262 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:00:41.360274 | orchestrator | 2026-02-14 04:00:41.360286 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-02-14 04:00:41.360299 | orchestrator | Saturday 14 February 2026 04:00:04 +0000 (0:00:07.990) 0:07:57.214 ***** 2026-02-14 04:00:41.360326 | orchestrator | skipping: [testbed-node-4] 2026-02-14 04:00:41.360339 | orchestrator | skipping: [testbed-node-3] 2026-02-14 04:00:41.360352 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:00:41.360365 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:00:41.360378 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:00:41.360390 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2026-02-14 04:00:41.360401 | orchestrator | 2026-02-14 04:00:41.360411 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-02-14 04:00:41.360422 | orchestrator | Saturday 14 February 2026 04:00:08 +0000 (0:00:03.608) 0:08:00.823 ***** 2026-02-14 04:00:41.360433 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-14 04:00:41.360444 | 
orchestrator | 2026-02-14 04:00:41.360455 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-02-14 04:00:41.360466 | orchestrator | Saturday 14 February 2026 04:00:21 +0000 (0:00:13.021) 0:08:13.845 ***** 2026-02-14 04:00:41.360476 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-14 04:00:41.360487 | orchestrator | 2026-02-14 04:00:41.360498 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-02-14 04:00:41.360508 | orchestrator | Saturday 14 February 2026 04:00:22 +0000 (0:00:01.534) 0:08:15.380 ***** 2026-02-14 04:00:41.360519 | orchestrator | skipping: [testbed-node-5] 2026-02-14 04:00:41.360530 | orchestrator | 2026-02-14 04:00:41.360541 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-02-14 04:00:41.360552 | orchestrator | Saturday 14 February 2026 04:00:24 +0000 (0:00:01.619) 0:08:16.999 ***** 2026-02-14 04:00:41.360563 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-14 04:00:41.360573 | orchestrator | 2026-02-14 04:00:41.360584 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-02-14 04:00:41.360595 | orchestrator | Saturday 14 February 2026 04:00:36 +0000 (0:00:11.946) 0:08:28.945 ***** 2026-02-14 04:00:41.360605 | orchestrator | ok: [testbed-node-3] 2026-02-14 04:00:41.360617 | orchestrator | ok: [testbed-node-4] 2026-02-14 04:00:41.360628 | orchestrator | ok: [testbed-node-5] 2026-02-14 04:00:41.360673 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:00:41.360685 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:00:41.360696 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:00:41.360707 | orchestrator | 2026-02-14 04:00:41.360718 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-02-14 04:00:41.360729 | orchestrator | 2026-02-14 
04:00:41.360740 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-02-14 04:00:41.360751 | orchestrator | Saturday 14 February 2026 04:00:38 +0000 (0:00:01.776) 0:08:30.722 ***** 2026-02-14 04:00:41.360762 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:00:41.360773 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:00:41.360784 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:00:41.360795 | orchestrator | 2026-02-14 04:00:41.360806 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-02-14 04:00:41.360816 | orchestrator | 2026-02-14 04:00:41.360827 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-02-14 04:00:41.360838 | orchestrator | Saturday 14 February 2026 04:00:39 +0000 (0:00:00.966) 0:08:31.688 ***** 2026-02-14 04:00:41.360849 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:00:41.360860 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:00:41.360871 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:00:41.360890 | orchestrator | 2026-02-14 04:00:41.360901 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-02-14 04:00:41.360912 | orchestrator | 2026-02-14 04:00:41.360923 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-02-14 04:00:41.360934 | orchestrator | Saturday 14 February 2026 04:00:39 +0000 (0:00:00.724) 0:08:32.413 ***** 2026-02-14 04:00:41.360945 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-02-14 04:00:41.360956 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-02-14 04:00:41.360967 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-02-14 04:00:41.360978 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-02-14 04:00:41.360989 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-02-14 04:00:41.361000 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-02-14 04:00:41.361011 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:00:41.361021 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-02-14 04:00:41.361032 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-02-14 04:00:41.361043 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-02-14 04:00:41.361054 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-02-14 04:00:41.361065 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-02-14 04:00:41.361076 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-02-14 04:00:41.361087 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:00:41.361098 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-02-14 04:00:41.361109 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-02-14 04:00:41.361119 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-02-14 04:00:41.361131 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-02-14 04:00:41.361141 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-02-14 04:00:41.361152 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-02-14 04:00:41.361163 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:00:41.361174 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-02-14 04:00:41.361199 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-02-14 04:00:41.361210 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-02-14 04:00:41.361221 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-02-14 04:00:41.361237 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-02-14 04:00:41.361248 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-02-14 04:00:41.361269 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:00:41.361281 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-02-14 04:00:41.361292 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-02-14 04:00:41.361303 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-02-14 04:00:41.361314 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-02-14 04:00:41.361325 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-02-14 04:00:41.361336 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-02-14 04:00:41.361347 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:00:41.361358 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-02-14 04:00:41.361369 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-02-14 04:00:41.361380 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-02-14 04:00:41.361391 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-02-14 04:00:41.361402 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-02-14 04:00:41.361413 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-02-14 04:00:41.361431 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:00:41.361442 | orchestrator |
2026-02-14 04:00:41.361453 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-02-14 04:00:41.361464 | orchestrator |
2026-02-14 04:00:41.361475 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-02-14 04:00:41.361486 | orchestrator | Saturday 14 February 2026 04:00:41 +0000 (0:00:01.327) 0:08:33.741 *****
2026-02-14 04:00:41.361497 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-02-14 04:00:41.361508 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-02-14 04:00:41.361519 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:00:41.361537 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-02-14 04:00:43.414291 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-02-14 04:00:43.414388 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:00:43.414402 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-02-14 04:00:43.414418 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-02-14 04:00:43.414437 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:00:43.414456 | orchestrator |
2026-02-14 04:00:43.414475 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-02-14 04:00:43.414494 | orchestrator |
2026-02-14 04:00:43.414512 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-02-14 04:00:43.414531 | orchestrator | Saturday 14 February 2026 04:00:41 +0000 (0:00:00.571) 0:08:34.312 *****
2026-02-14 04:00:43.414551 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:00:43.414568 | orchestrator |
2026-02-14 04:00:43.414587 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-02-14 04:00:43.414606 | orchestrator |
2026-02-14 04:00:43.414624 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-02-14 04:00:43.414672 | orchestrator | Saturday 14 February 2026 04:00:42 +0000 (0:00:00.850) 0:08:35.162 *****
2026-02-14 04:00:43.414686 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:00:43.414698 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:00:43.414709 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:00:43.414720 | orchestrator |
2026-02-14 04:00:43.414731 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 04:00:43.414742 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-14 04:00:43.414756 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2026-02-14 04:00:43.414768 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-02-14 04:00:43.414778 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-02-14 04:00:43.414789 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-02-14 04:00:43.414800 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-02-14 04:00:43.414824 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2026-02-14 04:00:43.414835 | orchestrator |
2026-02-14 04:00:43.414859 | orchestrator |
2026-02-14 04:00:43.414873 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 04:00:43.414886 | orchestrator | Saturday 14 February 2026 04:00:43 +0000 (0:00:00.452) 0:08:35.615 *****
2026-02-14 04:00:43.414899 | orchestrator | ===============================================================================
2026-02-14 04:00:43.414937 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 40.39s
2026-02-14 04:00:43.414950 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 31.28s
2026-02-14 04:00:43.414963 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 29.91s
2026-02-14 04:00:43.414991 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 25.86s
2026-02-14 04:00:43.415005 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.29s
2026-02-14 04:00:43.415018 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.99s
2026-02-14 04:00:43.415031 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 19.58s
2026-02-14 04:00:43.415043 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 19.24s
2026-02-14 04:00:43.415056 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.80s
2026-02-14 04:00:43.415068 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.24s
2026-02-14 04:00:43.415081 | orchestrator | nova-cell : Create cell ------------------------------------------------ 13.46s
2026-02-14 04:00:43.415094 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.02s
2026-02-14 04:00:43.415107 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.95s
2026-02-14 04:00:43.415120 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.95s
2026-02-14 04:00:43.415131 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.10s
2026-02-14 04:00:43.415142 | orchestrator | nova : Restart nova-api container --------------------------------------- 9.94s
2026-02-14 04:00:43.415153 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 7.99s
2026-02-14 04:00:43.415164 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 7.57s
2026-02-14 04:00:43.415175 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.22s
2026-02-14 04:00:43.415186 | orchestrator | nova-cell : Restart nova-conductor container ---------------------------- 6.91s
2026-02-14 04:00:45.796271 | orchestrator | 2026-02-14 04:00:45 | INFO  | Task 33b08a82-833e-4e00-998d-4053a2a8e4cc (horizon) was prepared for execution.
2026-02-14 04:00:45.796389 | orchestrator | 2026-02-14 04:00:45 | INFO  | It takes a moment until task 33b08a82-833e-4e00-998d-4053a2a8e4cc (horizon) has been started and output is visible here.
2026-02-14 04:00:52.998244 | orchestrator |
2026-02-14 04:00:52.998377 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-14 04:00:52.998413 | orchestrator |
2026-02-14 04:00:52.998435 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-14 04:00:52.998455 | orchestrator | Saturday 14 February 2026 04:00:49 +0000 (0:00:00.266) 0:00:00.266 *****
2026-02-14 04:00:52.998474 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:00:52.998492 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:00:52.998508 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:00:52.998527 | orchestrator |
2026-02-14 04:00:52.998547 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-14 04:00:52.998566 | orchestrator | Saturday 14 February 2026 04:00:50 +0000 (0:00:00.323) 0:00:00.590 *****
2026-02-14 04:00:52.998583 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-02-14 04:00:52.998684 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-02-14 04:00:52.998709 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-02-14 04:00:52.998727 | orchestrator |
2026-02-14 04:00:52.998744 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-02-14 04:00:52.998761 | orchestrator |
2026-02-14 04:00:52.998779 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-14 04:00:52.998798 | orchestrator | Saturday 14 February 2026 04:00:50 +0000 (0:00:00.441) 0:00:01.032 *****
2026-02-14 04:00:52.998850 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 04:00:52.998870 | orchestrator |
2026-02-14 04:00:52.998889 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-02-14 04:00:52.998906 | orchestrator | Saturday 14 February 2026 04:00:51 +0000 (0:00:00.517) 0:00:01.549 *****
2026-02-14 04:00:52.998957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-14 04:00:52.999020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-14 04:00:52.999069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-14 04:00:52.999092 | orchestrator |
2026-02-14 04:00:52.999113 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2026-02-14 04:00:52.999131 | orchestrator | Saturday 14 February 2026 04:00:52 +0000 (0:00:01.199) 0:00:02.749 *****
2026-02-14 04:00:52.999150 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:00:52.999169 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:00:52.999186 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:00:52.999204 | orchestrator |
2026-02-14 04:00:52.999218 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-14 04:00:52.999229 | orchestrator | Saturday 14 February 2026 04:00:52 +0000 (0:00:00.464) 0:00:03.214 *****
2026-02-14 04:00:52.999250 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2026-02-14 04:00:59.093033 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2026-02-14 04:00:59.093150 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2026-02-14 04:00:59.093168 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2026-02-14 04:00:59.093182 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-02-14 04:00:59.093220 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-02-14 04:00:59.093233 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-02-14 04:00:59.093246 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-02-14 04:00:59.093259 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-02-14 04:00:59.093271 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-02-14 04:00:59.093283 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-02-14 04:00:59.093295 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-02-14 04:00:59.093308 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-02-14 04:00:59.093320 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-02-14 04:00:59.093332 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-02-14 04:00:59.093344 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-02-14 04:00:59.093357 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-02-14 04:00:59.093369 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-02-14 04:00:59.093381 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-02-14 04:00:59.093393 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-02-14 04:00:59.093406 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-02-14 04:00:59.093419 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-02-14 04:00:59.093432 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-02-14 04:00:59.093444 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-02-14 04:00:59.093459 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-02-14 04:00:59.093473 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-02-14 04:00:59.093501 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-02-14 04:00:59.093514 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-02-14 04:00:59.093526 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-02-14 04:00:59.093539 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-02-14 04:00:59.093551 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-02-14 04:00:59.093565 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-02-14 04:00:59.093619 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-02-14 04:00:59.093640 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-02-14 04:00:59.093660 | orchestrator |
2026-02-14 04:00:59.093675 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-14 04:00:59.093690 | orchestrator | Saturday 14 February 2026 04:00:53 +0000 (0:00:00.788) 0:00:04.002 *****
2026-02-14 04:00:59.093704 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:00:59.093716 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:00:59.093728 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:00:59.093741 | orchestrator |
2026-02-14 04:00:59.093754 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-14 04:00:59.093767 | orchestrator | Saturday 14 February 2026 04:00:53 +0000 (0:00:00.320) 0:00:04.322 *****
2026-02-14 04:00:59.093780 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:00:59.093795 | orchestrator |
2026-02-14 04:00:59.093827 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-14 04:00:59.093842 | orchestrator | Saturday 14 February 2026 04:00:54 +0000 (0:00:00.311) 0:00:04.633 *****
2026-02-14 04:00:59.093854 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:00:59.093867 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:00:59.093879 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:00:59.093892 | orchestrator |
2026-02-14 04:00:59.093905 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-14 04:00:59.093918 | orchestrator | Saturday 14 February 2026 04:00:54 +0000 (0:00:00.312) 0:00:04.945 *****
2026-02-14 04:00:59.093930 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:00:59.093942 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:00:59.093954 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:00:59.093968 | orchestrator |
2026-02-14 04:00:59.093981 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-14 04:00:59.093994 | orchestrator | Saturday 14 February 2026 04:00:54 +0000 (0:00:00.328) 0:00:05.273 *****
2026-02-14 04:00:59.094006 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:00:59.094081 | orchestrator |
2026-02-14 04:00:59.094098 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-14 04:00:59.094111 | orchestrator | Saturday 14 February 2026 04:00:55 +0000 (0:00:00.146) 0:00:05.420 *****
2026-02-14 04:00:59.094123 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:00:59.094136 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:00:59.094183 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:00:59.094197 | orchestrator |
2026-02-14 04:00:59.094210 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-14 04:00:59.094223 | orchestrator | Saturday 14 February 2026 04:00:55 +0000 (0:00:00.298) 0:00:05.719 *****
2026-02-14 04:00:59.094237 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:00:59.094250 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:00:59.094263 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:00:59.094276 | orchestrator |
2026-02-14 04:00:59.094289 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-14 04:00:59.094302 | orchestrator | Saturday 14 February 2026 04:00:55 +0000 (0:00:00.521) 0:00:06.241 *****
2026-02-14 04:00:59.094315 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:00:59.094329 | orchestrator |
2026-02-14 04:00:59.094342 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-14 04:00:59.094355 | orchestrator | Saturday 14 February 2026 04:00:56 +0000 (0:00:00.139) 0:00:06.381 *****
2026-02-14 04:00:59.094368 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:00:59.094381 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:00:59.094393 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:00:59.094407 | orchestrator |
2026-02-14 04:00:59.094420 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-14 04:00:59.094433 | orchestrator | Saturday 14 February 2026 04:00:56 +0000 (0:00:00.330) 0:00:06.711 *****
2026-02-14 04:00:59.094446 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:00:59.094459 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:00:59.094473 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:00:59.094496 | orchestrator |
2026-02-14 04:00:59.094509 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-14 04:00:59.094522 | orchestrator | Saturday 14 February 2026 04:00:56 +0000 (0:00:00.320) 0:00:07.031 *****
2026-02-14 04:00:59.094535 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:00:59.094548 | orchestrator |
2026-02-14 04:00:59.094561 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-14 04:00:59.094604 | orchestrator | Saturday 14 February 2026 04:00:56 +0000 (0:00:00.137) 0:00:07.169 *****
2026-02-14 04:00:59.094618 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:00:59.094639 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:00:59.094652 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:00:59.094663 | orchestrator |
2026-02-14 04:00:59.094675 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-14 04:00:59.094687 | orchestrator | Saturday 14 February 2026 04:00:57 +0000 (0:00:00.539) 0:00:07.709 *****
2026-02-14 04:00:59.094699 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:00:59.094712 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:00:59.094724 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:00:59.094737 | orchestrator |
2026-02-14 04:00:59.094749 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-14 04:00:59.094760 | orchestrator | Saturday 14 February 2026 04:00:57 +0000 (0:00:00.329) 0:00:08.038 *****
2026-02-14 04:00:59.094773 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:00:59.094785 | orchestrator |
2026-02-14 04:00:59.094797 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-14 04:00:59.094809 | orchestrator | Saturday 14 February 2026 04:00:57 +0000 (0:00:00.129) 0:00:08.168 *****
2026-02-14 04:00:59.094821 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:00:59.094833 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:00:59.094845 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:00:59.094858 | orchestrator |
2026-02-14 04:00:59.094871 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-14 04:00:59.094884 | orchestrator | Saturday 14 February 2026 04:00:58 +0000 (0:00:00.289) 0:00:08.457 *****
2026-02-14 04:00:59.094897 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:00:59.094909 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:00:59.094921 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:00:59.094934 | orchestrator |
2026-02-14 04:00:59.094947 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-14 04:00:59.094961 | orchestrator | Saturday 14 February 2026 04:00:58 +0000 (0:00:00.323) 0:00:08.781 *****
2026-02-14 04:00:59.094974 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:00:59.094987 | orchestrator |
2026-02-14 04:00:59.094999 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-14 04:00:59.095011 | orchestrator | Saturday 14 February 2026 04:00:58 +0000 (0:00:00.320) 0:00:09.101 *****
2026-02-14 04:00:59.095024 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:00:59.095037 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:00:59.095051 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:00:59.095064 | orchestrator |
2026-02-14 04:00:59.095078 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-14 04:00:59.095107 | orchestrator | Saturday 14 February 2026 04:00:59 +0000 (0:00:00.312) 0:00:09.413 *****
2026-02-14 04:01:13.065935 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:01:13.066115 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:01:13.066134 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:01:13.066147 | orchestrator |
2026-02-14 04:01:13.066160 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-14 04:01:13.066173 | orchestrator | Saturday 14 February 2026 04:00:59 +0000 (0:00:00.320) 0:00:09.734 *****
2026-02-14 04:01:13.066185 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:01:13.066197 | orchestrator |
2026-02-14 04:01:13.066209 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-14 04:01:13.066245 | orchestrator | Saturday 14 February 2026 04:00:59 +0000 (0:00:00.139) 0:00:09.873 *****
2026-02-14 04:01:13.066257 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:01:13.066268 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:01:13.066280 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:01:13.066292 | orchestrator |
2026-02-14 04:01:13.066303 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-14 04:01:13.066314 | orchestrator | Saturday 14 February 2026 04:00:59 +0000 (0:00:00.300) 0:00:10.173 *****
2026-02-14 04:01:13.066326 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:01:13.066337 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:01:13.066348 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:01:13.066359 | orchestrator |
2026-02-14 04:01:13.066370 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-14 04:01:13.066381 | orchestrator | Saturday 14 February 2026 04:01:00 +0000 (0:00:00.524) 0:00:10.698 *****
2026-02-14 04:01:13.066393 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:01:13.066404 | orchestrator |
2026-02-14 04:01:13.066415 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-14 04:01:13.066426 | orchestrator | Saturday 14 February 2026 04:01:00 +0000 (0:00:00.141) 0:00:10.840 *****
2026-02-14 04:01:13.066437 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:01:13.066448 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:01:13.066459 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:01:13.066471 | orchestrator |
2026-02-14 04:01:13.066485 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-14 04:01:13.066498 | orchestrator | Saturday 14 February 2026 04:01:00 +0000 (0:00:00.295) 0:00:11.135 *****
2026-02-14 04:01:13.066546 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:01:13.066561 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:01:13.066574 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:01:13.066587 | orchestrator |
2026-02-14 04:01:13.066600 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-14 04:01:13.066613 | orchestrator | Saturday 14 February 2026 04:01:01 +0000 (0:00:00.355) 0:00:11.491 *****
2026-02-14 04:01:13.066626 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:01:13.066639 | orchestrator |
2026-02-14 04:01:13.066652 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-14 04:01:13.066664 | orchestrator | Saturday 14 February 2026 04:01:01 +0000 (0:00:00.135) 0:00:11.626 *****
2026-02-14 04:01:13.066678 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:01:13.066691 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:01:13.066702 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:01:13.066713 | orchestrator |
2026-02-14 04:01:13.066724 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-14 04:01:13.066735 | orchestrator | Saturday 14 February 2026 04:01:01 +0000 (0:00:00.517) 0:00:12.144 *****
2026-02-14 04:01:13.066747 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:01:13.066758 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:01:13.066769 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:01:13.066780 | orchestrator |
2026-02-14 04:01:13.066791 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-14 04:01:13.066816 | orchestrator | Saturday 14 February 2026 04:01:02 +0000 (0:00:00.328) 0:00:12.472 *****
2026-02-14 04:01:13.066828 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:01:13.066839 | orchestrator |
2026-02-14 04:01:13.066850 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-14 04:01:13.066862 | orchestrator | Saturday 14 February 2026 04:01:02 +0000 (0:00:00.131) 0:00:12.604 *****
2026-02-14 04:01:13.066873 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:01:13.066884 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:01:13.066895 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:01:13.066906 | orchestrator |
2026-02-14 04:01:13.066917 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-02-14 04:01:13.066928 | orchestrator | Saturday 14 February 2026 04:01:02 +0000 (0:00:00.305) 0:00:12.910 *****
2026-02-14 04:01:13.066949 | orchestrator | changed: [testbed-node-2]
2026-02-14 04:01:13.066960 | orchestrator | changed: [testbed-node-1]
2026-02-14 04:01:13.066971 | orchestrator | changed: [testbed-node-0]
2026-02-14 04:01:13.066982 | orchestrator |
2026-02-14 04:01:13.066993 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-02-14 04:01:13.067087 | orchestrator | Saturday 14 February 2026 04:01:04 +0000 (0:00:01.830) 0:00:14.740 *****
2026-02-14 04:01:13.067103 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-02-14 04:01:13.067115 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-02-14 04:01:13.067126 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-02-14 04:01:13.067137 | orchestrator |
2026-02-14 04:01:13.067148 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-02-14 04:01:13.067159 | orchestrator | Saturday 14 February 2026 04:01:06 +0000 (0:00:01.941) 0:00:16.681 *****
2026-02-14 04:01:13.067170 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-02-14 04:01:13.067182 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-02-14 04:01:13.067193 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-02-14 04:01:13.067204 | orchestrator |
2026-02-14 04:01:13.067215 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-02-14 04:01:13.067246 | orchestrator | Saturday 14 February 2026 04:01:08 +0000 (0:00:01.805) 0:00:18.487 *****
2026-02-14 04:01:13.067258 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-02-14 04:01:13.067269 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-02-14 04:01:13.067281 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-02-14 04:01:13.067292 | orchestrator |
2026-02-14 04:01:13.067303 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-02-14 04:01:13.067314 | orchestrator | Saturday 14 February 2026 04:01:09 +0000 (0:00:01.540) 0:00:20.027 *****
2026-02-14 04:01:13.067325 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:01:13.067336 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:01:13.067347 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:01:13.067358 | orchestrator |
2026-02-14 04:01:13.067368 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-02-14 04:01:13.067380 | orchestrator | Saturday 14 February 2026 04:01:10 +0000 (0:00:00.501) 0:00:20.529 *****
2026-02-14 04:01:13.067391 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:01:13.067402 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:01:13.067413 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:01:13.067458 | orchestrator |
2026-02-14 04:01:13.067487 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-14 04:01:13.067532 | orchestrator | Saturday 14
February 2026 04:01:10 +0000 (0:00:00.312) 0:00:20.842 ***** 2026-02-14 04:01:13.067546 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 04:01:13.067557 | orchestrator | 2026-02-14 04:01:13.067569 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-02-14 04:01:13.067580 | orchestrator | Saturday 14 February 2026 04:01:11 +0000 (0:00:00.632) 0:00:21.474 ***** 2026-02-14 04:01:13.067606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-14 04:01:13.067645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-14 04:01:13.701447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-14 04:01:13.701611 | orchestrator | 2026-02-14 04:01:13.701630 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-02-14 04:01:13.701644 | orchestrator | Saturday 14 February 2026 04:01:13 +0000 (0:00:01.904) 0:00:23.379 ***** 2026-02-14 04:01:13.701678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 
'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-14 04:01:13.701700 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:01:13.701721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-14 04:01:13.701733 | orchestrator | skipping: [testbed-node-1] 
2026-02-14 04:01:13.701755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-14 04:01:16.233135 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:01:16.233254 | orchestrator | 2026-02-14 04:01:16.233269 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-02-14 04:01:16.233280 | orchestrator | Saturday 14 February 2026 04:01:13 +0000 (0:00:00.645) 0:00:24.025 ***** 2026-02-14 04:01:16.233294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-14 04:01:16.233308 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:01:16.233336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-14 04:01:16.233366 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:01:16.233408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-14 04:01:16.233420 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:01:16.233430 | orchestrator | 2026-02-14 04:01:16.233440 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-02-14 04:01:16.233456 | orchestrator | Saturday 14 February 2026 04:01:14 +0000 (0:00:00.882) 0:00:24.907 ***** 2026-02-14 04:01:16.233481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 
'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-14 04:02:00.087304 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-14 04:02:00.087522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-14 04:02:00.087538 | orchestrator |
2026-02-14 04:02:00.087545 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-14 04:02:00.087552 | orchestrator | Saturday 14 February 2026 04:01:16 +0000 (0:00:01.646) 0:00:26.554 *****
2026-02-14 04:02:00.087558 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:02:00.087565 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:02:00.087571 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:02:00.087576 | orchestrator |
2026-02-14 04:02:00.087582 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-14 04:02:00.087587 | orchestrator | Saturday 14 February 2026 04:01:16 +0000 (0:00:00.297) 0:00:26.851 *****
2026-02-14 04:02:00.087593 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 04:02:00.087599 | orchestrator |
2026-02-14 04:02:00.087605 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-02-14 04:02:00.087610 | orchestrator | Saturday 14 February 2026 04:01:17 +0000 (0:00:00.569) 0:00:27.420 *****
2026-02-14 04:02:00.087616 | orchestrator | changed: [testbed-node-0]
2026-02-14 04:02:00.087621 | orchestrator |
2026-02-14 04:02:00.087626 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2026-02-14 04:02:00.087632 | orchestrator | Saturday 14 February 2026 04:01:19 +0000 (0:00:02.353) 0:00:29.773 *****
2026-02-14 04:02:00.087643 | orchestrator | changed: [testbed-node-0]
2026-02-14 04:02:00.087649 | orchestrator |
2026-02-14 04:02:00.087655 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-02-14 04:02:00.087660 | orchestrator | Saturday 14 February 2026 04:01:22 +0000 (0:00:02.617) 0:00:32.390 *****
2026-02-14 04:02:00.087666 | orchestrator | changed: [testbed-node-0]
2026-02-14 04:02:00.087671 | orchestrator |
2026-02-14 04:02:00.087677 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-02-14 04:02:00.087682 | orchestrator | Saturday 14 February 2026 04:01:37 +0000 (0:00:15.939) 0:00:48.330 *****
2026-02-14 04:02:00.087688 | orchestrator |
2026-02-14 04:02:00.087693 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-02-14 04:02:00.087699 | orchestrator | Saturday 14 February 2026 04:01:38 +0000 (0:00:00.066) 0:00:48.396 *****
2026-02-14 04:02:00.087704 | orchestrator |
2026-02-14 04:02:00.087710 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-02-14 04:02:00.087715 | orchestrator | Saturday 14 February 2026 04:01:38 +0000 (0:00:00.072) 0:00:48.469 *****
2026-02-14 04:02:00.087721 | orchestrator |
2026-02-14 04:02:00.087726 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2026-02-14 04:02:00.087731 | orchestrator | Saturday 14 February 2026 04:01:38 +0000 (0:00:00.076) 0:00:48.545 *****
2026-02-14 04:02:00.087737 | orchestrator | changed: [testbed-node-0]
2026-02-14 04:02:00.087742 | orchestrator | changed: [testbed-node-1]
2026-02-14 04:02:00.087748 | orchestrator | changed: [testbed-node-2]
2026-02-14 04:02:00.087753 | orchestrator |
2026-02-14 04:02:00.087759 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 04:02:00.087765 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0
skipped=25  rescued=0 ignored=0 2026-02-14 04:02:00.087773 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-02-14 04:02:00.087778 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-02-14 04:02:00.087783 | orchestrator | 2026-02-14 04:02:00.087789 | orchestrator | 2026-02-14 04:02:00.087794 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 04:02:00.087800 | orchestrator | Saturday 14 February 2026 04:02:00 +0000 (0:00:21.836) 0:01:10.382 ***** 2026-02-14 04:02:00.087805 | orchestrator | =============================================================================== 2026-02-14 04:02:00.087811 | orchestrator | horizon : Restart horizon container ------------------------------------ 21.84s 2026-02-14 04:02:00.087816 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.94s 2026-02-14 04:02:00.087825 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.62s 2026-02-14 04:02:00.087831 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.35s 2026-02-14 04:02:00.087836 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.94s 2026-02-14 04:02:00.087842 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.90s 2026-02-14 04:02:00.087847 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.83s 2026-02-14 04:02:00.087852 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.81s 2026-02-14 04:02:00.087859 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.65s 2026-02-14 04:02:00.087866 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.54s 
2026-02-14 04:02:00.087873 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.20s 2026-02-14 04:02:00.087879 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.88s 2026-02-14 04:02:00.087886 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.79s 2026-02-14 04:02:00.087901 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.65s 2026-02-14 04:02:00.450287 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.63s 2026-02-14 04:02:00.450425 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.57s 2026-02-14 04:02:00.450438 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.54s 2026-02-14 04:02:00.450448 | orchestrator | horizon : Update policy file name --------------------------------------- 0.52s 2026-02-14 04:02:00.450458 | orchestrator | horizon : Update policy file name --------------------------------------- 0.52s 2026-02-14 04:02:00.450467 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.52s 2026-02-14 04:02:02.754591 | orchestrator | 2026-02-14 04:02:02 | INFO  | Task a97afd40-f448-495c-ab43-42ecadd5a5b8 (skyline) was prepared for execution. 2026-02-14 04:02:02.754715 | orchestrator | 2026-02-14 04:02:02 | INFO  | It takes a moment until task a97afd40-f448-495c-ab43-42ecadd5a5b8 (skyline) has been started and output is visible here. 
2026-02-14 04:02:33.494851 | orchestrator | 2026-02-14 04:02:33.495003 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-14 04:02:33.495033 | orchestrator | 2026-02-14 04:02:33.495054 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-14 04:02:33.495075 | orchestrator | Saturday 14 February 2026 04:02:06 +0000 (0:00:00.257) 0:00:00.257 ***** 2026-02-14 04:02:33.495095 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:02:33.495117 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:02:33.495137 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:02:33.495157 | orchestrator | 2026-02-14 04:02:33.495211 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-14 04:02:33.495236 | orchestrator | Saturday 14 February 2026 04:02:07 +0000 (0:00:00.341) 0:00:00.598 ***** 2026-02-14 04:02:33.495257 | orchestrator | ok: [testbed-node-0] => (item=enable_skyline_True) 2026-02-14 04:02:33.495277 | orchestrator | ok: [testbed-node-1] => (item=enable_skyline_True) 2026-02-14 04:02:33.495298 | orchestrator | ok: [testbed-node-2] => (item=enable_skyline_True) 2026-02-14 04:02:33.495320 | orchestrator | 2026-02-14 04:02:33.495341 | orchestrator | PLAY [Apply role skyline] ****************************************************** 2026-02-14 04:02:33.495365 | orchestrator | 2026-02-14 04:02:33.495389 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-02-14 04:02:33.495411 | orchestrator | Saturday 14 February 2026 04:02:07 +0000 (0:00:00.459) 0:00:01.058 ***** 2026-02-14 04:02:33.495435 | orchestrator | included: /ansible/roles/skyline/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 04:02:33.495459 | orchestrator | 2026-02-14 04:02:33.495481 | orchestrator | TASK [service-ks-register : skyline | Creating services] *********************** 
2026-02-14 04:02:33.495503 | orchestrator | Saturday 14 February 2026 04:02:08 +0000 (0:00:00.544) 0:00:01.602 ***** 2026-02-14 04:02:33.495523 | orchestrator | changed: [testbed-node-0] => (item=skyline (panel)) 2026-02-14 04:02:33.495542 | orchestrator | 2026-02-14 04:02:33.495561 | orchestrator | TASK [service-ks-register : skyline | Creating endpoints] ********************** 2026-02-14 04:02:33.495581 | orchestrator | Saturday 14 February 2026 04:02:11 +0000 (0:00:03.193) 0:00:04.795 ***** 2026-02-14 04:02:33.495600 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api-int.testbed.osism.xyz:9998 -> internal) 2026-02-14 04:02:33.495618 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api.testbed.osism.xyz:9998 -> public) 2026-02-14 04:02:33.495637 | orchestrator | 2026-02-14 04:02:33.495655 | orchestrator | TASK [service-ks-register : skyline | Creating projects] *********************** 2026-02-14 04:02:33.495674 | orchestrator | Saturday 14 February 2026 04:02:17 +0000 (0:00:06.324) 0:00:11.119 ***** 2026-02-14 04:02:33.495694 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-14 04:02:33.495714 | orchestrator | 2026-02-14 04:02:33.495733 | orchestrator | TASK [service-ks-register : skyline | Creating users] ************************** 2026-02-14 04:02:33.495799 | orchestrator | Saturday 14 February 2026 04:02:21 +0000 (0:00:03.317) 0:00:14.437 ***** 2026-02-14 04:02:33.495821 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-14 04:02:33.495841 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service) 2026-02-14 04:02:33.495859 | orchestrator | 2026-02-14 04:02:33.495878 | orchestrator | TASK [service-ks-register : skyline | Creating roles] ************************** 2026-02-14 04:02:33.495897 | orchestrator | Saturday 14 February 2026 04:02:25 +0000 (0:00:03.996) 0:00:18.434 ***** 2026-02-14 04:02:33.495935 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-02-14 04:02:33.495957 | orchestrator | 2026-02-14 04:02:33.495969 | orchestrator | TASK [service-ks-register : skyline | Granting user roles] ********************* 2026-02-14 04:02:33.495980 | orchestrator | Saturday 14 February 2026 04:02:28 +0000 (0:00:03.283) 0:00:21.718 ***** 2026-02-14 04:02:33.495991 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service -> admin) 2026-02-14 04:02:33.496001 | orchestrator | 2026-02-14 04:02:33.496012 | orchestrator | TASK [skyline : Ensuring config directories exist] ***************************** 2026-02-14 04:02:33.496024 | orchestrator | Saturday 14 February 2026 04:02:32 +0000 (0:00:03.835) 0:00:25.553 ***** 2026-02-14 04:02:33.496038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-14 04:02:33.496080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-14 04:02:33.496094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-14 04:02:33.496124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-14 04:02:33.496137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-14 04:02:33.496159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-14 04:02:37.399589 | orchestrator | 2026-02-14 04:02:37.399693 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-02-14 04:02:37.399711 | orchestrator | Saturday 14 February 2026 04:02:33 +0000 (0:00:01.304) 0:00:26.858 ***** 2026-02-14 04:02:37.399724 | orchestrator | included: /ansible/roles/skyline/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 04:02:37.399736 | orchestrator | 2026-02-14 04:02:37.399747 | orchestrator | TASK [service-cert-copy : skyline | Copying over extra CA certificates] ******** 2026-02-14 04:02:37.399759 | orchestrator | Saturday 14 February 2026 04:02:34 +0000 (0:00:00.730) 0:00:27.589 ***** 2026-02-14 04:02:37.399772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-14 04:02:37.399827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-14 04:02:37.399841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-14 04:02:37.399871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-14 04:02:37.399885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-14 04:02:37.399906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-14 04:02:37.399917 | orchestrator | 2026-02-14 04:02:37.399929 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS certificate] *** 2026-02-14 04:02:37.399945 | orchestrator | Saturday 14 February 2026 04:02:36 +0000 (0:00:02.526) 0:00:30.115 ***** 2026-02-14 04:02:37.399987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-14 04:02:37.400001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-14 04:02:37.400012 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:02:37.400033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-14 04:02:38.649605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-14 04:02:38.649735 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:02:38.649772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-14 04:02:38.649786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-14 04:02:38.649798 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:02:38.649859 | orchestrator | 2026-02-14 04:02:38.649875 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS key] ***** 2026-02-14 04:02:38.649887 | orchestrator | Saturday 14 February 2026 04:02:37 +0000 (0:00:00.653) 0:00:30.769 ***** 2026-02-14 04:02:38.649899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 
'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-14 04:02:38.649955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-14 04:02:38.649968 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:02:38.649986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 
'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-14 04:02:38.649999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-14 04:02:38.650010 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:02:38.650101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-14 04:02:38.650134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-14 04:02:47.040324 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:02:47.040436 | orchestrator | 2026-02-14 04:02:47.040454 | orchestrator | TASK 
[skyline : Copying over skyline.yaml files for services] ****************** 2026-02-14 04:02:47.040468 | orchestrator | Saturday 14 February 2026 04:02:38 +0000 (0:00:01.246) 0:00:32.015 ***** 2026-02-14 04:02:47.040500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-14 04:02:47.040517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-14 04:02:47.040530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-14 04:02:47.040564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-14 04:02:47.040604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-14 04:02:47.040619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-14 04:02:47.040631 | orchestrator | 2026-02-14 04:02:47.040643 | orchestrator | TASK [skyline : Copying over gunicorn.py files for services] ******************* 2026-02-14 04:02:47.040655 | orchestrator | Saturday 14 February 2026 04:02:41 +0000 (0:00:02.418) 0:00:34.433 ***** 2026-02-14 04:02:47.040667 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-02-14 04:02:47.040678 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-02-14 04:02:47.040689 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-02-14 04:02:47.040701 | orchestrator | 2026-02-14 04:02:47.040712 | orchestrator | TASK [skyline : Copying over nginx.conf files for services] ******************** 2026-02-14 04:02:47.040732 | orchestrator | Saturday 14 February 2026 04:02:42 +0000 (0:00:01.565) 0:00:35.998 ***** 2026-02-14 04:02:47.040743 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-02-14 04:02:47.040755 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-02-14 04:02:47.040766 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-02-14 04:02:47.040778 | orchestrator | 2026-02-14 04:02:47.040789 | orchestrator | TASK [skyline : Copying over config.json files for services] ******************* 2026-02-14 04:02:47.040800 | orchestrator | Saturday 14 February 2026 04:02:44 +0000 (0:00:02.154) 0:00:38.153 ***** 2026-02-14 04:02:47.040813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-14 04:02:47.040835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-14 04:02:49.117838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-14 04:02:49.117926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-14 04:02:49.117967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-14 04:02:49.117975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-14 04:02:49.117989 | orchestrator | 2026-02-14 04:02:49.117997 | orchestrator | TASK [skyline : Copying over custom logos] ************************************* 2026-02-14 04:02:49.118005 | orchestrator | Saturday 14 February 2026 04:02:47 +0000 (0:00:02.255) 0:00:40.408 ***** 2026-02-14 04:02:49.118010 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:02:49.118055 | orchestrator | skipping: 
[testbed-node-1] 2026-02-14 04:02:49.118061 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:02:49.118066 | orchestrator | 2026-02-14 04:02:49.118086 | orchestrator | TASK [skyline : Check skyline container] *************************************** 2026-02-14 04:02:49.118097 | orchestrator | Saturday 14 February 2026 04:02:47 +0000 (0:00:00.348) 0:00:40.757 ***** 2026-02-14 04:02:49.118103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-14 04:02:49.118115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-14 04:02:49.118147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-14 04:02:49.118157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-14 04:02:49.118207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-14 04:03:22.543957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-14 04:03:22.544163 | orchestrator | 2026-02-14 04:03:22.544182 | orchestrator | TASK [skyline : Creating Skyline database] ************************************* 2026-02-14 04:03:22.544195 | orchestrator | Saturday 14 February 2026 04:02:49 +0000 (0:00:01.722) 0:00:42.479 ***** 2026-02-14 04:03:22.544207 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:03:22.544218 | orchestrator | 2026-02-14 04:03:22.544230 | orchestrator | TASK [skyline : Creating Skyline database user and setting permissions] ******** 2026-02-14 04:03:22.544240 | orchestrator | Saturday 14 February 2026 04:02:51 +0000 (0:00:02.053) 0:00:44.533 ***** 2026-02-14 04:03:22.544251 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:03:22.544262 | orchestrator | 2026-02-14 04:03:22.544273 | orchestrator | TASK [skyline : Running Skyline bootstrap container] *************************** 2026-02-14 04:03:22.544284 | orchestrator | Saturday 14 February 2026 04:02:53 +0000 (0:00:02.122) 0:00:46.656 ***** 2026-02-14 04:03:22.544295 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:03:22.544306 | orchestrator | 2026-02-14 04:03:22.544317 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-02-14 04:03:22.544328 | orchestrator | Saturday 14 February 2026 04:03:00 +0000 (0:00:07.681) 0:00:54.337 ***** 2026-02-14 04:03:22.544339 | orchestrator | 2026-02-14 04:03:22.544350 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-02-14 04:03:22.544360 | orchestrator | Saturday 14 February 2026 04:03:01 +0000 (0:00:00.070) 0:00:54.407 ***** 2026-02-14 04:03:22.544371 | orchestrator | 2026-02-14 04:03:22.544382 | orchestrator | TASK [skyline : Flush handlers] 
************************************************ 2026-02-14 04:03:22.544393 | orchestrator | Saturday 14 February 2026 04:03:01 +0000 (0:00:00.067) 0:00:54.475 ***** 2026-02-14 04:03:22.544404 | orchestrator | 2026-02-14 04:03:22.544414 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-apiserver container] **************** 2026-02-14 04:03:22.544425 | orchestrator | Saturday 14 February 2026 04:03:01 +0000 (0:00:00.069) 0:00:54.544 ***** 2026-02-14 04:03:22.544436 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:03:22.544447 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:03:22.544458 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:03:22.544468 | orchestrator | 2026-02-14 04:03:22.544479 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-console container] ****************** 2026-02-14 04:03:22.544492 | orchestrator | Saturday 14 February 2026 04:03:12 +0000 (0:00:11.578) 0:01:06.123 ***** 2026-02-14 04:03:22.544505 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:03:22.544518 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:03:22.544530 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:03:22.544543 | orchestrator | 2026-02-14 04:03:22.544556 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 04:03:22.544570 | orchestrator | testbed-node-0 : ok=22  changed=16  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-14 04:03:22.544585 | orchestrator | testbed-node-1 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-14 04:03:22.544598 | orchestrator | testbed-node-2 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-14 04:03:22.544619 | orchestrator | 2026-02-14 04:03:22.544631 | orchestrator | 2026-02-14 04:03:22.544644 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 04:03:22.544656 | orchestrator | Saturday 14 
February 2026 04:03:22 +0000 (0:00:09.447) 0:01:15.570 ***** 2026-02-14 04:03:22.544669 | orchestrator | =============================================================================== 2026-02-14 04:03:22.544697 | orchestrator | skyline : Restart skyline-apiserver container -------------------------- 11.58s 2026-02-14 04:03:22.544710 | orchestrator | skyline : Restart skyline-console container ----------------------------- 9.45s 2026-02-14 04:03:22.544723 | orchestrator | skyline : Running Skyline bootstrap container --------------------------- 7.68s 2026-02-14 04:03:22.544736 | orchestrator | service-ks-register : skyline | Creating endpoints ---------------------- 6.32s 2026-02-14 04:03:22.544748 | orchestrator | service-ks-register : skyline | Creating users -------------------------- 4.00s 2026-02-14 04:03:22.544761 | orchestrator | service-ks-register : skyline | Granting user roles --------------------- 3.84s 2026-02-14 04:03:22.544773 | orchestrator | service-ks-register : skyline | Creating projects ----------------------- 3.32s 2026-02-14 04:03:22.544786 | orchestrator | service-ks-register : skyline | Creating roles -------------------------- 3.28s 2026-02-14 04:03:22.544819 | orchestrator | service-ks-register : skyline | Creating services ----------------------- 3.19s 2026-02-14 04:03:22.544833 | orchestrator | service-cert-copy : skyline | Copying over extra CA certificates -------- 2.53s 2026-02-14 04:03:22.544847 | orchestrator | skyline : Copying over skyline.yaml files for services ------------------ 2.42s 2026-02-14 04:03:22.544857 | orchestrator | skyline : Copying over config.json files for services ------------------- 2.26s 2026-02-14 04:03:22.544868 | orchestrator | skyline : Copying over nginx.conf files for services -------------------- 2.15s 2026-02-14 04:03:22.544879 | orchestrator | skyline : Creating Skyline database user and setting permissions -------- 2.12s 2026-02-14 04:03:22.544890 | orchestrator | skyline : Creating Skyline 
database ------------------------------------- 2.05s 2026-02-14 04:03:22.544901 | orchestrator | skyline : Check skyline container --------------------------------------- 1.72s 2026-02-14 04:03:22.544911 | orchestrator | skyline : Copying over gunicorn.py files for services ------------------- 1.57s 2026-02-14 04:03:22.544922 | orchestrator | skyline : Ensuring config directories exist ----------------------------- 1.30s 2026-02-14 04:03:22.544933 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS key ----- 1.25s 2026-02-14 04:03:22.544944 | orchestrator | skyline : include_tasks ------------------------------------------------- 0.73s 2026-02-14 04:03:24.896303 | orchestrator | 2026-02-14 04:03:24 | INFO  | Task d461c434-9408-48f7-9ba3-6690a7b4ca44 (glance) was prepared for execution. 2026-02-14 04:03:24.896401 | orchestrator | 2026-02-14 04:03:24 | INFO  | It takes a moment until task d461c434-9408-48f7-9ba3-6690a7b4ca44 (glance) has been started and output is visible here. 
2026-02-14 04:03:58.176544 | orchestrator |
2026-02-14 04:03:58.176658 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-14 04:03:58.176675 | orchestrator |
2026-02-14 04:03:58.176687 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-14 04:03:58.176700 | orchestrator | Saturday 14 February 2026 04:03:28 +0000 (0:00:00.262) 0:00:00.262 *****
2026-02-14 04:03:58.176711 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:03:58.176724 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:03:58.176735 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:03:58.176746 | orchestrator |
2026-02-14 04:03:58.176758 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-14 04:03:58.176769 | orchestrator | Saturday 14 February 2026 04:03:29 +0000 (0:00:00.339) 0:00:00.602 *****
2026-02-14 04:03:58.176780 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-02-14 04:03:58.176792 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-02-14 04:03:58.176803 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-02-14 04:03:58.176838 | orchestrator |
2026-02-14 04:03:58.176849 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-02-14 04:03:58.176861 | orchestrator |
2026-02-14 04:03:58.176872 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-02-14 04:03:58.176936 | orchestrator | Saturday 14 February 2026 04:03:29 +0000 (0:00:00.450) 0:00:01.052 *****
2026-02-14 04:03:58.176948 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 04:03:58.176960 | orchestrator |
2026-02-14 04:03:58.176971 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2026-02-14 04:03:58.176981 | orchestrator | Saturday 14 February 2026 04:03:30 +0000 (0:00:00.546) 0:00:01.599 *****
2026-02-14 04:03:58.176992 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-02-14 04:03:58.177003 | orchestrator |
2026-02-14 04:03:58.177014 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2026-02-14 04:03:58.177025 | orchestrator | Saturday 14 February 2026 04:03:33 +0000 (0:00:03.346) 0:00:04.946 *****
2026-02-14 04:03:58.177036 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-02-14 04:03:58.177047 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-02-14 04:03:58.177058 | orchestrator |
2026-02-14 04:03:58.177070 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-02-14 04:03:58.177083 | orchestrator | Saturday 14 February 2026 04:03:40 +0000 (0:00:06.359) 0:00:11.305 *****
2026-02-14 04:03:58.177096 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-14 04:03:58.177109 | orchestrator |
2026-02-14 04:03:58.177122 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-02-14 04:03:58.177134 | orchestrator | Saturday 14 February 2026 04:03:43 +0000 (0:00:03.236) 0:00:14.542 *****
2026-02-14 04:03:58.177147 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-14 04:03:58.177160 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-02-14 04:03:58.177173 | orchestrator |
2026-02-14 04:03:58.177185 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-02-14 04:03:58.177212 | orchestrator | Saturday 14 February 2026 04:03:47 +0000 (0:00:03.970) 0:00:18.512 *****
2026-02-14 04:03:58.177226 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-14 04:03:58.177238 | orchestrator |
2026-02-14 04:03:58.177251 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2026-02-14 04:03:58.177264 | orchestrator | Saturday 14 February 2026 04:03:50 +0000 (0:00:03.170) 0:00:21.683 *****
2026-02-14 04:03:58.177277 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2026-02-14 04:03:58.177290 | orchestrator |
2026-02-14 04:03:58.177302 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2026-02-14 04:03:58.177314 | orchestrator | Saturday 14 February 2026 04:03:54 +0000 (0:00:03.684) 0:00:25.368 *****
2026-02-14 04:03:58.177353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-14 04:03:58.177382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-14 04:03:58.177402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-14 04:03:58.177423 | orchestrator |
2026-02-14 04:03:58.177436 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-02-14 04:03:58.177447 | orchestrator | Saturday 14 February 2026 04:03:57 +0000 (0:00:03.357) 0:00:28.725 *****
2026-02-14 04:03:58.177459 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 04:03:58.177470 | orchestrator |
2026-02-14 04:03:58.177487 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2026-02-14 04:04:13.243976 | orchestrator | Saturday 14 February 2026 04:03:58 +0000 (0:00:00.702) 0:00:29.427 *****
2026-02-14 04:04:13.244096 | orchestrator | changed: [testbed-node-0]
2026-02-14 04:04:13.244113 | orchestrator | changed: [testbed-node-1]
2026-02-14 04:04:13.244125 | orchestrator | changed: [testbed-node-2]
2026-02-14 04:04:13.244137 | orchestrator |
2026-02-14 04:04:13.244149 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-02-14 04:04:13.244161 | orchestrator | Saturday 14 February 2026 04:04:01 +0000 (0:00:03.528) 0:00:32.956 *****
2026-02-14 04:04:13.244173 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-14 04:04:13.244186 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-14 04:04:13.244197 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-14 04:04:13.244207 | orchestrator |
2026-02-14 04:04:13.244218 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-02-14 04:04:13.244229 | orchestrator | Saturday 14 February 2026 04:04:03 +0000 (0:00:01.623) 0:00:34.579 *****
2026-02-14 04:04:13.244240 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-14 04:04:13.244251 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-14 04:04:13.244262 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-14 04:04:13.244273 | orchestrator |
2026-02-14 04:04:13.244283 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-02-14 04:04:13.244294 | orchestrator | Saturday 14 February 2026 04:04:04 +0000 (0:00:01.470) 0:00:36.049 *****
2026-02-14 04:04:13.244305 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:04:13.244317 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:04:13.244328 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:04:13.244338 | orchestrator |
2026-02-14 04:04:13.244349 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-02-14 04:04:13.244361 | orchestrator | Saturday 14 February 2026 04:04:05 +0000 (0:00:00.734) 0:00:36.784 *****
2026-02-14 04:04:13.244372 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:04:13.244383 | orchestrator |
2026-02-14 04:04:13.244394 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-02-14 04:04:13.244405 | orchestrator | Saturday 14 February 2026 04:04:05 +0000 (0:00:00.130) 0:00:36.915 *****
2026-02-14 04:04:13.244415 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:04:13.244426 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:04:13.244437 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:04:13.244448 | orchestrator |
2026-02-14 04:04:13.244459 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-02-14 04:04:13.244472 | orchestrator | Saturday 14 February 2026 04:04:05 +0000 (0:00:00.291) 0:00:37.207 *****
2026-02-14 04:04:13.244500 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 04:04:13.244513 | orchestrator |
2026-02-14 04:04:13.244525 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2026-02-14 04:04:13.244537 | orchestrator | Saturday 14 February 2026 04:04:06 +0000 (0:00:00.728) 0:00:37.936 *****
2026-02-14 04:04:13.244578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-14 04:04:13.244614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-14 04:04:13.244634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-14 04:04:13.244656 | orchestrator |
2026-02-14 04:04:13.244667 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] ***
2026-02-14 04:04:13.244678 | orchestrator | Saturday 14 February 2026 04:04:10 +0000 (0:00:03.703) 0:00:41.640 *****
2026-02-14 04:04:13.244700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-14 04:04:16.719240 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:04:16.719367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-14 04:04:16.719410 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:04:16.719425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-14 04:04:16.719437 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:04:16.719448 | orchestrator |
2026-02-14 04:04:16.719460 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ******
2026-02-14 04:04:16.719472 | orchestrator | Saturday 14 February 2026 04:04:13 +0000 (0:00:02.862) 0:00:44.502 *****
2026-02-14 04:04:16.719510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-14 04:04:16.719531 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:04:16.719544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-14 04:04:16.719556 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:04:16.719577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-14 04:04:50.087567 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:04:50.087713 | orchestrator |
2026-02-14 04:04:50.087799 | orchestrator | TASK [glance : Creating TLS backend PEM File] **********************************
2026-02-14 04:04:50.087814 | orchestrator | Saturday 14 February 2026 04:04:16 +0000 (0:00:03.475) 0:00:47.977 *****
2026-02-14 04:04:50.087826 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:04:50.087838 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:04:50.087849 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:04:50.087860 | orchestrator |
2026-02-14 04:04:50.087889 | orchestrator | TASK [glance : Copying over config.json files for services] ********************
2026-02-14 04:04:50.087901 | orchestrator | Saturday 14 February 2026 04:04:19 +0000 (0:00:03.167) 0:00:51.145 *****
2026-02-14 04:04:50.087917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-14 04:04:50.087934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-14 04:04:50.088003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-14 04:04:50.088019 | orchestrator |
2026-02-14 04:04:50.088031 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2026-02-14 04:04:50.088043 | orchestrator | Saturday 14 February 2026 04:04:23 +0000 (0:00:03.938) 0:00:55.083 *****
2026-02-14 04:04:50.088055 | orchestrator | changed: [testbed-node-1]
2026-02-14 04:04:50.088066 | orchestrator | changed: [testbed-node-0]
2026-02-14 04:04:50.088077 | orchestrator | changed: [testbed-node-2]
2026-02-14 04:04:50.088089 | orchestrator |
2026-02-14 04:04:50.088100 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2026-02-14 04:04:50.088111 | orchestrator | Saturday 14 February 2026 04:04:29 +0000 (0:00:05.396) 0:01:00.480 *****
2026-02-14 04:04:50.088123 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:04:50.088135 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:04:50.088148 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:04:50.088166 | orchestrator |
2026-02-14 04:04:50.088184 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2026-02-14 04:04:50.088202 | orchestrator | Saturday 14 February 2026 04:04:32 +0000 (0:00:03.340) 0:01:03.820 *****
2026-02-14 04:04:50.088221 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:04:50.088238 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:04:50.088257 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:04:50.088275 | orchestrator |
2026-02-14 04:04:50.088294 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2026-02-14 04:04:50.088310 | orchestrator | Saturday 14 February 2026 04:04:35 +0000 (0:00:03.270) 0:01:07.091 *****
2026-02-14 04:04:50.088321 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:04:50.088332 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:04:50.088343 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:04:50.088354 | orchestrator |
2026-02-14 04:04:50.088365 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2026-02-14 04:04:50.088376 | orchestrator | Saturday 14 February 2026 04:04:38 +0000 (0:00:03.148) 0:01:10.239 *****
2026-02-14 04:04:50.088387 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:04:50.088398 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:04:50.088409 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:04:50.088431 | orchestrator |
2026-02-14 04:04:50.088442 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2026-02-14 04:04:50.088460 | orchestrator | Saturday 14 February 2026 04:04:42 +0000 (0:00:03.327) 0:01:13.567 *****
2026-02-14 04:04:50.088487 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:04:50.088508 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:04:50.088526 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:04:50.088546 | orchestrator |
2026-02-14 04:04:50.088564 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2026-02-14 04:04:50.088581 | orchestrator | Saturday 14 February 2026 04:04:42 +0000 (0:00:00.528) 0:01:14.095 *****
2026-02-14 04:04:50.088596 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-02-14 04:04:50.088612 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:04:50.088630 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-02-14 04:04:50.088648 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:04:50.088666 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-02-14 04:04:50.088684 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:04:50.088703 | orchestrator |
2026-02-14 04:04:50.088756 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] ***********************
2026-02-14 04:04:50.088777 | orchestrator | Saturday 14 February 2026 04:04:45 +0000 (0:00:03.109) 0:01:17.205 *****
2026-02-14 04:04:50.088790 | orchestrator | changed: [testbed-node-0]
2026-02-14 04:04:50.088801 | orchestrator | changed: [testbed-node-1]
2026-02-14 04:04:50.088813 | orchestrator | changed: [testbed-node-2]
2026-02-14 04:04:50.088824 | orchestrator |
2026-02-14 04:04:50.088835 | orchestrator | TASK [glance : Check glance containers] ****************************************
2026-02-14 04:04:50.088858 | orchestrator | Saturday 14 February 2026 04:04:50 +0000 (0:00:04.136) 0:01:21.341 *****
2026-02-14 04:06:02.804036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image':
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-14 04:06:02.804155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-14 04:06:02.804223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-14 04:06:02.804239 | orchestrator | 2026-02-14 04:06:02.804253 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-14 04:06:02.804266 | orchestrator | Saturday 14 February 2026 04:04:53 +0000 (0:00:03.748) 0:01:25.089 ***** 2026-02-14 04:06:02.804278 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:06:02.804290 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:06:02.804301 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:06:02.804312 | orchestrator | 2026-02-14 04:06:02.804323 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-02-14 04:06:02.804334 | orchestrator | Saturday 14 February 2026 04:04:54 +0000 (0:00:00.536) 0:01:25.626 ***** 2026-02-14 04:06:02.804345 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:06:02.804356 | orchestrator | 2026-02-14 04:06:02.804368 | orchestrator | TASK 
[glance : Creating Glance database user and setting permissions] ********** 2026-02-14 04:06:02.804380 | orchestrator | Saturday 14 February 2026 04:04:56 +0000 (0:00:02.091) 0:01:27.717 ***** 2026-02-14 04:06:02.804399 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:06:02.804411 | orchestrator | 2026-02-14 04:06:02.804422 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-02-14 04:06:02.804433 | orchestrator | Saturday 14 February 2026 04:04:58 +0000 (0:00:02.235) 0:01:29.953 ***** 2026-02-14 04:06:02.804444 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:06:02.804455 | orchestrator | 2026-02-14 04:06:02.804466 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-02-14 04:06:02.804477 | orchestrator | Saturday 14 February 2026 04:05:00 +0000 (0:00:02.100) 0:01:32.054 ***** 2026-02-14 04:06:02.804488 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:06:02.804499 | orchestrator | 2026-02-14 04:06:02.804509 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-02-14 04:06:02.804520 | orchestrator | Saturday 14 February 2026 04:05:28 +0000 (0:00:27.604) 0:01:59.658 ***** 2026-02-14 04:06:02.804531 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:06:02.804543 | orchestrator | 2026-02-14 04:06:02.804557 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-14 04:06:02.804598 | orchestrator | Saturday 14 February 2026 04:05:30 +0000 (0:00:02.013) 0:02:01.672 ***** 2026-02-14 04:06:02.804610 | orchestrator | 2026-02-14 04:06:02.804623 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-14 04:06:02.804635 | orchestrator | Saturday 14 February 2026 04:05:30 +0000 (0:00:00.068) 0:02:01.740 ***** 2026-02-14 04:06:02.804648 | orchestrator | 2026-02-14 04:06:02.804661 | orchestrator | TASK 
[glance : Flush handlers] ************************************************* 2026-02-14 04:06:02.804673 | orchestrator | Saturday 14 February 2026 04:05:30 +0000 (0:00:00.069) 0:02:01.810 ***** 2026-02-14 04:06:02.804685 | orchestrator | 2026-02-14 04:06:02.804697 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-02-14 04:06:02.804710 | orchestrator | Saturday 14 February 2026 04:05:30 +0000 (0:00:00.068) 0:02:01.878 ***** 2026-02-14 04:06:02.804722 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:06:02.804734 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:06:02.804746 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:06:02.804759 | orchestrator | 2026-02-14 04:06:02.804771 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 04:06:02.804786 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-14 04:06:02.804799 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-14 04:06:02.804811 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-14 04:06:02.804824 | orchestrator | 2026-02-14 04:06:02.804837 | orchestrator | 2026-02-14 04:06:02.804850 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 04:06:02.804863 | orchestrator | Saturday 14 February 2026 04:06:02 +0000 (0:00:32.176) 0:02:34.055 ***** 2026-02-14 04:06:02.804875 | orchestrator | =============================================================================== 2026-02-14 04:06:02.804887 | orchestrator | glance : Restart glance-api container ---------------------------------- 32.18s 2026-02-14 04:06:02.804900 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 27.60s 2026-02-14 04:06:02.804911 | 
orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.36s 2026-02-14 04:06:02.804930 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.40s 2026-02-14 04:06:03.134974 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.14s 2026-02-14 04:06:03.135069 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.97s 2026-02-14 04:06:03.135086 | orchestrator | glance : Copying over config.json files for services -------------------- 3.94s 2026-02-14 04:06:03.135140 | orchestrator | glance : Check glance containers ---------------------------------------- 3.75s 2026-02-14 04:06:03.135154 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.70s 2026-02-14 04:06:03.135165 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.68s 2026-02-14 04:06:03.135176 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.53s 2026-02-14 04:06:03.135187 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.47s 2026-02-14 04:06:03.135199 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.36s 2026-02-14 04:06:03.135211 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.35s 2026-02-14 04:06:03.135222 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.34s 2026-02-14 04:06:03.135233 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.33s 2026-02-14 04:06:03.135244 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.27s 2026-02-14 04:06:03.135255 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.24s 2026-02-14 04:06:03.135267 | orchestrator | 
service-ks-register : glance | Creating roles --------------------------- 3.17s 2026-02-14 04:06:03.135278 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.17s 2026-02-14 04:06:05.455695 | orchestrator | 2026-02-14 04:06:05 | INFO  | Task d1b4d77c-8c1b-4360-8385-2efbe17a6a51 (cinder) was prepared for execution. 2026-02-14 04:06:05.455793 | orchestrator | 2026-02-14 04:06:05 | INFO  | It takes a moment until task d1b4d77c-8c1b-4360-8385-2efbe17a6a51 (cinder) has been started and output is visible here. 2026-02-14 04:06:40.321020 | orchestrator | 2026-02-14 04:06:40.321120 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-14 04:06:40.321132 | orchestrator | 2026-02-14 04:06:40.321141 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-14 04:06:40.321150 | orchestrator | Saturday 14 February 2026 04:06:09 +0000 (0:00:00.249) 0:00:00.249 ***** 2026-02-14 04:06:40.321159 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:06:40.321168 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:06:40.321176 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:06:40.321184 | orchestrator | 2026-02-14 04:06:40.321193 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-14 04:06:40.321201 | orchestrator | Saturday 14 February 2026 04:06:09 +0000 (0:00:00.318) 0:00:00.568 ***** 2026-02-14 04:06:40.321209 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-02-14 04:06:40.321218 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-02-14 04:06:40.321226 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-02-14 04:06:40.321234 | orchestrator | 2026-02-14 04:06:40.321245 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-02-14 04:06:40.321257 | orchestrator | 2026-02-14 
04:06:40.321271 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-14 04:06:40.321284 | orchestrator | Saturday 14 February 2026 04:06:10 +0000 (0:00:00.463) 0:00:01.032 ***** 2026-02-14 04:06:40.321305 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 04:06:40.321319 | orchestrator | 2026-02-14 04:06:40.321330 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-02-14 04:06:40.321343 | orchestrator | Saturday 14 February 2026 04:06:10 +0000 (0:00:00.585) 0:00:01.618 ***** 2026-02-14 04:06:40.321357 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-02-14 04:06:40.321369 | orchestrator | 2026-02-14 04:06:40.321381 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-02-14 04:06:40.321393 | orchestrator | Saturday 14 February 2026 04:06:14 +0000 (0:00:03.580) 0:00:05.199 ***** 2026-02-14 04:06:40.321406 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-02-14 04:06:40.321446 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-02-14 04:06:40.321458 | orchestrator | 2026-02-14 04:06:40.321533 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-02-14 04:06:40.321548 | orchestrator | Saturday 14 February 2026 04:06:20 +0000 (0:00:06.389) 0:00:11.588 ***** 2026-02-14 04:06:40.321561 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-14 04:06:40.321574 | orchestrator | 2026-02-14 04:06:40.321586 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-02-14 04:06:40.321599 | orchestrator | Saturday 14 February 2026 04:06:24 +0000 (0:00:03.125) 
0:00:14.713 ***** 2026-02-14 04:06:40.321610 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-14 04:06:40.321622 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-02-14 04:06:40.321634 | orchestrator | 2026-02-14 04:06:40.321645 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-02-14 04:06:40.321657 | orchestrator | Saturday 14 February 2026 04:06:28 +0000 (0:00:03.973) 0:00:18.687 ***** 2026-02-14 04:06:40.321669 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-14 04:06:40.321681 | orchestrator | 2026-02-14 04:06:40.321694 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-02-14 04:06:40.321707 | orchestrator | Saturday 14 February 2026 04:06:31 +0000 (0:00:03.157) 0:00:21.844 ***** 2026-02-14 04:06:40.321720 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-02-14 04:06:40.321735 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-02-14 04:06:40.321748 | orchestrator | 2026-02-14 04:06:40.321779 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-02-14 04:06:40.321789 | orchestrator | Saturday 14 February 2026 04:06:38 +0000 (0:00:07.251) 0:00:29.095 ***** 2026-02-14 04:06:40.321800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-14 04:06:40.321832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-14 04:06:40.321842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-14 04:06:40.321861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-14 04:06:40.321871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-14 04:06:40.321884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-14 04:06:40.321894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-14 04:06:40.321908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-14 04:06:46.058571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-14 04:06:46.058696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-14 04:06:46.058730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-14 04:06:46.058743 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-14 04:06:46.058755 | orchestrator | 2026-02-14 04:06:46.058769 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-14 04:06:46.058782 | orchestrator | Saturday 14 February 2026 04:06:40 +0000 (0:00:01.959) 0:00:31.055 ***** 2026-02-14 04:06:46.058793 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:06:46.058805 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:06:46.058816 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:06:46.058827 | orchestrator | 2026-02-14 04:06:46.058838 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-14 04:06:46.058849 | orchestrator | Saturday 14 February 2026 04:06:40 +0000 (0:00:00.499) 0:00:31.555 ***** 2026-02-14 04:06:46.058861 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 04:06:46.058872 | orchestrator | 2026-02-14 04:06:46.058883 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-02-14 04:06:46.058895 | orchestrator | Saturday 14 February 2026 04:06:41 +0000 (0:00:00.535) 0:00:32.090 ***** 2026-02-14 04:06:46.058929 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-02-14 04:06:46.058941 | 
orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-02-14 04:06:46.058952 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-02-14 04:06:46.058963 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-02-14 04:06:46.058974 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-02-14 04:06:46.058984 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-02-14 04:06:46.058995 | orchestrator | 2026-02-14 04:06:46.059006 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-02-14 04:06:46.059017 | orchestrator | Saturday 14 February 2026 04:06:43 +0000 (0:00:01.606) 0:00:33.696 ***** 2026-02-14 04:06:46.059048 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-14 04:06:46.059064 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-14 04:06:46.059082 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-14 04:06:46.059094 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-14 04:06:46.059121 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-14 04:06:56.730980 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-14 04:06:56.731809 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-14 04:06:56.731840 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-14 04:06:56.731845 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-14 04:06:56.731865 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-14 04:06:56.731882 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-14 
04:06:56.731887 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-14 04:06:56.731891 | orchestrator | 2026-02-14 04:06:56.731896 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-02-14 04:06:56.731901 | orchestrator | Saturday 14 February 2026 04:06:46 +0000 (0:00:03.281) 0:00:36.978 ***** 2026-02-14 04:06:56.731906 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-14 04:06:56.731911 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-14 04:06:56.731915 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-14 04:06:56.731918 | orchestrator | 2026-02-14 04:06:56.731922 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-02-14 04:06:56.731926 | orchestrator | Saturday 14 February 2026 04:06:47 +0000 (0:00:01.505) 0:00:38.484 ***** 2026-02-14 04:06:56.731931 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-02-14 04:06:56.731938 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-02-14 04:06:56.731942 | 
orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-02-14 04:06:56.731946 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-02-14 04:06:56.731950 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-02-14 04:06:56.731953 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-02-14 04:06:56.731957 | orchestrator | 2026-02-14 04:06:56.731964 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-02-14 04:06:56.731968 | orchestrator | Saturday 14 February 2026 04:06:50 +0000 (0:00:02.694) 0:00:41.179 ***** 2026-02-14 04:06:56.731973 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-02-14 04:06:56.731977 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-02-14 04:06:56.731981 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-02-14 04:06:56.731985 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-02-14 04:06:56.731988 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-02-14 04:06:56.731992 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-02-14 04:06:56.731996 | orchestrator | 2026-02-14 04:06:56.732000 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-02-14 04:06:56.732003 | orchestrator | Saturday 14 February 2026 04:06:51 +0000 (0:00:01.069) 0:00:42.248 ***** 2026-02-14 04:06:56.732007 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:06:56.732011 | orchestrator | 2026-02-14 04:06:56.732015 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-02-14 04:06:56.732019 | orchestrator | Saturday 14 February 2026 04:06:51 +0000 (0:00:00.117) 0:00:42.366 ***** 2026-02-14 04:06:56.732023 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:06:56.732027 | orchestrator | 
skipping: [testbed-node-1] 2026-02-14 04:06:56.732030 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:06:56.732034 | orchestrator | 2026-02-14 04:06:56.732038 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-14 04:06:56.732041 | orchestrator | Saturday 14 February 2026 04:06:52 +0000 (0:00:00.502) 0:00:42.869 ***** 2026-02-14 04:06:56.732046 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 04:06:56.732050 | orchestrator | 2026-02-14 04:06:56.732054 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-02-14 04:06:56.732057 | orchestrator | Saturday 14 February 2026 04:06:52 +0000 (0:00:00.581) 0:00:43.450 ***** 2026-02-14 04:06:56.732066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-14 04:06:57.613562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-14 04:06:57.613678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-14 04:06:57.613716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-14 04:06:57.613730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-14 04:06:57.613741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-14 04:06:57.613773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-14 04:06:57.613786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-14 04:06:57.613810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-14 
04:06:57.613822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-14 04:06:57.613834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-14 04:06:57.613845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-14 04:06:57.613857 | orchestrator | 2026-02-14 04:06:57.613870 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-02-14 04:06:57.613883 | orchestrator | Saturday 14 February 2026 04:06:56 +0000 (0:00:04.028) 0:00:47.479 ***** 2026-02-14 04:06:57.613903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-14 04:06:57.710403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 04:06:57.710541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-14 04:06:57.710557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-14 04:06:57.710571 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:06:57.710586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-14 04:06:57.710600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 04:06:57.710630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-02-14 04:06:57.710669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-14 04:06:57.710682 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:06:57.710695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-14 04:06:57.710707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 04:06:57.710720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-14 04:06:57.710732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-14 04:06:57.710752 | orchestrator | skipping: 
[testbed-node-2] 2026-02-14 04:06:57.710764 | orchestrator | 2026-02-14 04:06:57.710777 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-02-14 04:06:57.710798 | orchestrator | Saturday 14 February 2026 04:06:57 +0000 (0:00:00.887) 0:00:48.367 ***** 2026-02-14 04:06:58.282558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-14 04:06:58.282660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 04:06:58.282678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-14 04:06:58.282691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-14 04:06:58.282704 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:06:58.282718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-14 04:06:58.282776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 04:06:58.282796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-14 04:06:58.282808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-14 04:06:58.282819 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:06:58.282831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-14 04:06:58.282843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 04:06:58.282870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-14 04:07:02.830256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-14 04:07:02.830369 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:07:02.830387 | orchestrator | 2026-02-14 04:07:02.830400 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 
2026-02-14 04:07:02.830413 | orchestrator | Saturday 14 February 2026 04:06:58 +0000 (0:00:00.858) 0:00:49.225 ***** 2026-02-14 04:07:02.830489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-14 04:07:02.830503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-14 
04:07:02.830536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-14 04:07:02.830567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-14 04:07:02.830587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-14 04:07:02.830599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-14 04:07:02.830611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-14 04:07:02.830624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-14 04:07:02.830643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-14 04:07:02.830661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-14 04:07:15.260562 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-14 04:07:15.260682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-14 04:07:15.260701 | orchestrator | 2026-02-14 04:07:15.260715 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-02-14 04:07:15.260729 | orchestrator | Saturday 14 February 2026 04:07:02 +0000 (0:00:04.350) 0:00:53.576 ***** 2026-02-14 04:07:15.260740 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-14 04:07:15.260752 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-14 04:07:15.260763 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-14 04:07:15.260776 | orchestrator | 2026-02-14 04:07:15.260795 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-02-14 04:07:15.260813 | orchestrator | Saturday 14 February 2026 04:07:04 +0000 (0:00:01.845) 0:00:55.421 ***** 2026-02-14 04:07:15.260866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-14 04:07:15.260882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-14 04:07:15.260921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-14 04:07:15.260935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-14 04:07:15.260947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-14 04:07:15.260966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-14 04:07:15.260978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-14 04:07:15.260989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-14 04:07:15.261014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-14 04:07:17.679852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-14 04:07:17.679960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-14 04:07:17.680007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-14 04:07:17.680021 | orchestrator | 2026-02-14 04:07:17.680034 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-02-14 04:07:17.680047 | orchestrator | Saturday 14 February 2026 04:07:15 +0000 (0:00:10.572) 0:01:05.994 ***** 2026-02-14 04:07:17.680058 | orchestrator | changed: [testbed-node-0] 
2026-02-14 04:07:17.680070 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:07:17.680081 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:07:17.680092 | orchestrator | 2026-02-14 04:07:17.680103 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-02-14 04:07:17.680114 | orchestrator | Saturday 14 February 2026 04:07:16 +0000 (0:00:01.559) 0:01:07.554 ***** 2026-02-14 04:07:17.680127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-14 04:07:17.680155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}})  2026-02-14 04:07:17.680186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-14 04:07:17.680206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-14 04:07:17.680237 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:07:17.680249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-14 04:07:17.680260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 04:07:17.680272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-14 04:07:17.680298 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-14 04:07:21.229865 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:07:21.229977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-14 04:07:21.230094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 04:07:21.230113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-14 04:07:21.230125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-14 04:07:21.230137 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:07:21.230150 | orchestrator | 2026-02-14 
04:07:21.230162 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-02-14 04:07:21.230176 | orchestrator | Saturday 14 February 2026 04:07:17 +0000 (0:00:00.870) 0:01:08.424 ***** 2026-02-14 04:07:21.230187 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:07:21.230198 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:07:21.230209 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:07:21.230220 | orchestrator | 2026-02-14 04:07:21.230231 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-02-14 04:07:21.230242 | orchestrator | Saturday 14 February 2026 04:07:18 +0000 (0:00:00.537) 0:01:08.962 ***** 2026-02-14 04:07:21.230288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-14 04:07:21.230312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-14 04:07:21.230324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-14 04:07:21.230336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-14 04:07:21.230347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-14 04:07:21.230364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-14 04:07:21.230427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-14 04:09:00.479542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-14 04:09:00.479695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-14 04:09:00.479717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-14 04:09:00.479730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-14 04:09:00.479759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}}) 2026-02-14 04:09:00.479799 | orchestrator | 2026-02-14 04:09:00.479815 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-14 04:09:00.479828 | orchestrator | Saturday 14 February 2026 04:07:21 +0000 (0:00:03.013) 0:01:11.975 ***** 2026-02-14 04:09:00.479839 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:09:00.479851 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:09:00.479862 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:09:00.479872 | orchestrator | 2026-02-14 04:09:00.479883 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-02-14 04:09:00.479895 | orchestrator | Saturday 14 February 2026 04:07:21 +0000 (0:00:00.329) 0:01:12.305 ***** 2026-02-14 04:09:00.479906 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:09:00.479917 | orchestrator | 2026-02-14 04:09:00.479947 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-02-14 04:09:00.479959 | orchestrator | Saturday 14 February 2026 04:07:23 +0000 (0:00:02.083) 0:01:14.389 ***** 2026-02-14 04:09:00.479969 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:09:00.479980 | orchestrator | 2026-02-14 04:09:00.479991 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-02-14 04:09:00.480002 | orchestrator | Saturday 14 February 2026 04:07:25 +0000 (0:00:02.231) 0:01:16.621 ***** 2026-02-14 04:09:00.480013 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:09:00.480024 | orchestrator | 2026-02-14 04:09:00.480035 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-14 04:09:00.480046 | orchestrator | Saturday 14 February 2026 04:07:45 +0000 (0:00:19.267) 0:01:35.888 ***** 2026-02-14 04:09:00.480057 | orchestrator | 2026-02-14 04:09:00.480070 | orchestrator | TASK [cinder : Flush handlers] 
************************************************* 2026-02-14 04:09:00.480082 | orchestrator | Saturday 14 February 2026 04:07:45 +0000 (0:00:00.068) 0:01:35.957 ***** 2026-02-14 04:09:00.480095 | orchestrator | 2026-02-14 04:09:00.480107 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-14 04:09:00.480120 | orchestrator | Saturday 14 February 2026 04:07:45 +0000 (0:00:00.068) 0:01:36.025 ***** 2026-02-14 04:09:00.480132 | orchestrator | 2026-02-14 04:09:00.480145 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-02-14 04:09:00.480158 | orchestrator | Saturday 14 February 2026 04:07:45 +0000 (0:00:00.071) 0:01:36.097 ***** 2026-02-14 04:09:00.480214 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:09:00.480228 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:09:00.480241 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:09:00.480254 | orchestrator | 2026-02-14 04:09:00.480267 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-02-14 04:09:00.480280 | orchestrator | Saturday 14 February 2026 04:08:16 +0000 (0:00:31.466) 0:02:07.564 ***** 2026-02-14 04:09:00.480293 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:09:00.480306 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:09:00.480320 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:09:00.480332 | orchestrator | 2026-02-14 04:09:00.480345 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-02-14 04:09:00.480358 | orchestrator | Saturday 14 February 2026 04:08:24 +0000 (0:00:08.059) 0:02:15.623 ***** 2026-02-14 04:09:00.480372 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:09:00.480383 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:09:00.480394 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:09:00.480405 | orchestrator | 2026-02-14 
04:09:00.480426 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-02-14 04:09:00.480437 | orchestrator | Saturday 14 February 2026 04:08:51 +0000 (0:00:26.851) 0:02:42.474 ***** 2026-02-14 04:09:00.480448 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:09:00.480459 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:09:00.480470 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:09:00.480481 | orchestrator | 2026-02-14 04:09:00.480491 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-02-14 04:09:00.480503 | orchestrator | Saturday 14 February 2026 04:09:00 +0000 (0:00:08.349) 0:02:50.824 ***** 2026-02-14 04:09:00.480514 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:09:00.480525 | orchestrator | 2026-02-14 04:09:00.480536 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 04:09:00.480548 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-14 04:09:00.480561 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-14 04:09:00.480572 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-14 04:09:00.480583 | orchestrator | 2026-02-14 04:09:00.480593 | orchestrator | 2026-02-14 04:09:00.480604 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 04:09:00.480615 | orchestrator | Saturday 14 February 2026 04:09:00 +0000 (0:00:00.285) 0:02:51.110 ***** 2026-02-14 04:09:00.480626 | orchestrator | =============================================================================== 2026-02-14 04:09:00.480643 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 31.47s 2026-02-14 04:09:00.480662 | orchestrator | cinder 
: Restart cinder-volume container ------------------------------- 26.85s 2026-02-14 04:09:00.480681 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.27s 2026-02-14 04:09:00.480699 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.57s 2026-02-14 04:09:00.480718 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 8.35s 2026-02-14 04:09:00.480735 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 8.06s 2026-02-14 04:09:00.480754 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.25s 2026-02-14 04:09:00.480773 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.39s 2026-02-14 04:09:00.480789 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.35s 2026-02-14 04:09:00.480809 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.03s 2026-02-14 04:09:00.480829 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.97s 2026-02-14 04:09:00.480847 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.58s 2026-02-14 04:09:00.480865 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.28s 2026-02-14 04:09:00.480885 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.16s 2026-02-14 04:09:00.480917 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.13s 2026-02-14 04:09:00.806385 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.01s 2026-02-14 04:09:00.806467 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.69s 2026-02-14 04:09:00.806477 | orchestrator | cinder : Creating 
Cinder database user and setting permissions ---------- 2.23s 2026-02-14 04:09:00.806485 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.08s 2026-02-14 04:09:00.806493 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 1.96s 2026-02-14 04:09:03.045215 | orchestrator | 2026-02-14 04:09:03 | INFO  | Task 03f98673-8b98-4965-ad7f-a30cb346a30f (barbican) was prepared for execution. 2026-02-14 04:09:03.045344 | orchestrator | 2026-02-14 04:09:03 | INFO  | It takes a moment until task 03f98673-8b98-4965-ad7f-a30cb346a30f (barbican) has been started and output is visible here. 2026-02-14 04:09:46.737289 | orchestrator | 2026-02-14 04:09:46.737410 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-14 04:09:46.737426 | orchestrator | 2026-02-14 04:09:46.737438 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-14 04:09:46.737450 | orchestrator | Saturday 14 February 2026 04:09:07 +0000 (0:00:00.258) 0:00:00.258 ***** 2026-02-14 04:09:46.737461 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:09:46.737474 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:09:46.737485 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:09:46.737495 | orchestrator | 2026-02-14 04:09:46.737506 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-14 04:09:46.737517 | orchestrator | Saturday 14 February 2026 04:09:07 +0000 (0:00:00.327) 0:00:00.585 ***** 2026-02-14 04:09:46.737528 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-02-14 04:09:46.737540 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-02-14 04:09:46.737550 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-02-14 04:09:46.737561 | orchestrator | 2026-02-14 04:09:46.737572 | orchestrator | PLAY [Apply role barbican] 
***************************************************** 2026-02-14 04:09:46.737583 | orchestrator | 2026-02-14 04:09:46.737594 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-14 04:09:46.737605 | orchestrator | Saturday 14 February 2026 04:09:07 +0000 (0:00:00.447) 0:00:01.033 ***** 2026-02-14 04:09:46.737616 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 04:09:46.737629 | orchestrator | 2026-02-14 04:09:46.737639 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-02-14 04:09:46.737650 | orchestrator | Saturday 14 February 2026 04:09:08 +0000 (0:00:00.535) 0:00:01.568 ***** 2026-02-14 04:09:46.737662 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-02-14 04:09:46.737673 | orchestrator | 2026-02-14 04:09:46.737684 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-02-14 04:09:46.737695 | orchestrator | Saturday 14 February 2026 04:09:11 +0000 (0:00:03.523) 0:00:05.092 ***** 2026-02-14 04:09:46.737705 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-02-14 04:09:46.737717 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-02-14 04:09:46.737728 | orchestrator | 2026-02-14 04:09:46.737738 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-02-14 04:09:46.737749 | orchestrator | Saturday 14 February 2026 04:09:18 +0000 (0:00:06.478) 0:00:11.570 ***** 2026-02-14 04:09:46.737760 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-14 04:09:46.737771 | orchestrator | 2026-02-14 04:09:46.737782 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-02-14 
04:09:46.737793 | orchestrator | Saturday 14 February 2026 04:09:21 +0000 (0:00:03.285) 0:00:14.856 ***** 2026-02-14 04:09:46.737804 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-14 04:09:46.737816 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-02-14 04:09:46.737829 | orchestrator | 2026-02-14 04:09:46.737858 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-02-14 04:09:46.737872 | orchestrator | Saturday 14 February 2026 04:09:25 +0000 (0:00:04.034) 0:00:18.890 ***** 2026-02-14 04:09:46.737884 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-14 04:09:46.737897 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-02-14 04:09:46.737909 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-02-14 04:09:46.737946 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-02-14 04:09:46.737959 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-02-14 04:09:46.737972 | orchestrator | 2026-02-14 04:09:46.737984 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-02-14 04:09:46.737996 | orchestrator | Saturday 14 February 2026 04:09:41 +0000 (0:00:15.493) 0:00:34.384 ***** 2026-02-14 04:09:46.738009 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-02-14 04:09:46.738138 | orchestrator | 2026-02-14 04:09:46.738152 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-02-14 04:09:46.738164 | orchestrator | Saturday 14 February 2026 04:09:45 +0000 (0:00:03.822) 0:00:38.207 ***** 2026-02-14 04:09:46.738179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-14 04:09:46.738215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-14 04:09:46.738228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-14 04:09:46.738247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-14 04:09:46.738271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-14 04:09:46.738283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-14 04:09:46.738303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-14 04:09:52.571674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-14 04:09:52.571790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-14 04:09:52.571807 | orchestrator |
2026-02-14 04:09:52.571822 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2026-02-14 04:09:52.571835 | orchestrator | Saturday 14 February 2026 04:09:46 +0000 (0:00:01.629) 0:00:39.837 *****
2026-02-14 04:09:52.571847 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2026-02-14 04:09:52.571858 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2026-02-14 04:09:52.571869 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2026-02-14 04:09:52.571902 | orchestrator |
2026-02-14 04:09:52.571913 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2026-02-14 04:09:52.571924 | orchestrator | Saturday 14 February 2026 04:09:47 +0000 (0:00:01.257) 0:00:41.094 *****
2026-02-14 04:09:52.571935 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:09:52.571946 | orchestrator |
2026-02-14 04:09:52.571957 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2026-02-14 04:09:52.571968 | orchestrator | Saturday 14 February 2026 04:09:48 +0000 (0:00:00.320) 0:00:41.415 *****
2026-02-14 04:09:52.571994 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:09:52.572005 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:09:52.572015 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:09:52.572026 | orchestrator |
2026-02-14 04:09:52.572037 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-02-14 04:09:52.572048 | orchestrator | Saturday 14 February 2026 04:09:48 +0000 (0:00:00.332) 0:00:41.747 *****
2026-02-14 04:09:52.572059 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 04:09:52.572147 | orchestrator |
2026-02-14 04:09:52.572163 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2026-02-14 04:09:52.572174 | orchestrator | Saturday 14 February 2026 04:09:49 +0000 (0:00:00.546) 0:00:42.293 *****
2026-02-14 04:09:52.572187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-14 04:09:52.572219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-14 04:09:52.572231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-14 04:09:52.572253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-14 04:09:52.572274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-14 04:09:52.572287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-14 04:09:52.572298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-14 04:09:52.572318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-14 04:09:54.061591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-14 04:09:54.061721 | orchestrator |
2026-02-14 04:09:54.061739 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] ***
2026-02-14 04:09:54.061752 | orchestrator | Saturday 14 February 2026 04:09:52 +0000 (0:00:03.376) 0:00:45.670 *****
2026-02-14 04:09:54.061781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-14 04:09:54.061795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-14 04:09:54.061808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-14 04:09:54.061820 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:09:54.061833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-14 04:09:54.061863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-14 04:09:54.061883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-14 04:09:54.061894 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:09:54.061911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-14 04:09:54.061923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-14 04:09:54.061935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-14 04:09:54.061946 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:09:54.061958 | orchestrator |
2026-02-14 04:09:54.061969 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] ****
2026-02-14 04:09:54.061980 | orchestrator | Saturday 14 February 2026 04:09:53 +0000 (0:00:00.640) 0:00:46.310 *****
2026-02-14 04:09:54.062001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-14 04:09:57.725918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-14 04:09:57.726978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-14 04:09:57.727021 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:09:57.727037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-14 04:09:57.727050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-14 04:09:57.727098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-14 04:09:57.727133 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:09:57.727169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-14 04:09:57.727183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-14 04:09:57.727202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-14 04:09:57.727214 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:09:57.727225 | orchestrator |
2026-02-14 04:09:57.727237 | orchestrator | TASK [barbican : Copying over config.json files for services] ******************
2026-02-14 04:09:57.727250 | orchestrator | Saturday 14 February 2026 04:09:54 +0000 (0:00:00.856) 0:00:47.167 *****
2026-02-14 04:09:57.727262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-14 04:09:57.727275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-14 04:09:57.727302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-14 04:10:07.432634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-14 04:10:07.432742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-14 04:10:07.432760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-14 04:10:07.432773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-14 04:10:07.432810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-14 04:10:07.432823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-14 04:10:07.432835 | orchestrator |
2026-02-14 04:10:07.432848 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2026-02-14 04:10:07.432861 | orchestrator | Saturday 14 February 2026 04:09:57 +0000 (0:00:03.662) 0:00:50.829 *****
2026-02-14 04:10:07.432873 | orchestrator | changed: [testbed-node-1]
2026-02-14 04:10:07.432885 | orchestrator | changed: [testbed-node-0]
2026-02-14 04:10:07.432897 | orchestrator | changed: [testbed-node-2]
2026-02-14 04:10:07.432908 | orchestrator |
2026-02-14 04:10:07.432936 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2026-02-14 04:10:07.432948 | orchestrator | Saturday 14 February 2026 04:09:59 +0000 (0:00:00.899) 0:00:52.355 *****
2026-02-14 04:10:07.432960 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-14 04:10:07.432971 | orchestrator |
2026-02-14 04:10:07.432982 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2026-02-14 04:10:07.432992 | orchestrator | Saturday 14 February 2026 04:10:00 +0000 (0:00:00.899) 0:00:53.255 *****
2026-02-14 04:10:07.433003 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:10:07.433014 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:10:07.433025 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:10:07.433035 | orchestrator |
2026-02-14 04:10:07.433118 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2026-02-14 04:10:07.433132 | orchestrator | Saturday 14 February 2026 04:10:00 +0000 (0:00:00.605) 0:00:53.860 *****
2026-02-14 04:10:07.433195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-14 04:10:07.433221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-14 04:10:07.433255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-14 04:10:07.433288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-14 04:10:08.384554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-14 04:10:08.384668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-14 04:10:08.384685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-14 04:10:08.384720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-14 04:10:08.384732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-14 04:10:08.384745 | orchestrator | 2026-02-14 04:10:08.384758 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-02-14 04:10:08.384771 | orchestrator | Saturday 14 February 2026 04:10:07 +0000 (0:00:06.677) 0:01:00.538 ***** 2026-02-14 04:10:08.384799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-14 04:10:08.384819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-14 04:10:08.384832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-14 04:10:08.384857 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:10:08.384870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-14 04:10:08.384882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-14 04:10:08.384894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-14 04:10:08.384905 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:10:08.384931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-14 04:10:10.723022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-14 04:10:10.723177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-14 04:10:10.723196 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:10:10.723210 | orchestrator | 2026-02-14 04:10:10.723223 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-02-14 04:10:10.723236 | orchestrator | Saturday 14 February 2026 04:10:08 +0000 (0:00:00.951) 0:01:01.489 ***** 2026-02-14 04:10:10.723248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-14 04:10:10.723261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-14 04:10:10.723306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-14 04:10:10.723320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-14 04:10:10.723340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-14 04:10:10.723352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-14 04:10:10.723363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-14 04:10:10.723375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-14 04:10:10.723386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-14 04:10:10.723397 | orchestrator | 2026-02-14 04:10:10.723414 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-14 04:10:10.723438 | orchestrator | Saturday 14 February 2026 04:10:10 +0000 (0:00:02.330) 0:01:03.820 ***** 2026-02-14 04:10:54.429203 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:10:54.429347 | orchestrator | skipping: [testbed-node-1] 2026-02-14 
04:10:54.429365 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:10:54.429377 | orchestrator | 2026-02-14 04:10:54.429390 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-02-14 04:10:54.429403 | orchestrator | Saturday 14 February 2026 04:10:11 +0000 (0:00:00.305) 0:01:04.126 ***** 2026-02-14 04:10:54.429414 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:10:54.429425 | orchestrator | 2026-02-14 04:10:54.429436 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-02-14 04:10:54.429448 | orchestrator | Saturday 14 February 2026 04:10:13 +0000 (0:00:02.242) 0:01:06.369 ***** 2026-02-14 04:10:54.429459 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:10:54.429470 | orchestrator | 2026-02-14 04:10:54.429481 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-02-14 04:10:54.429492 | orchestrator | Saturday 14 February 2026 04:10:15 +0000 (0:00:02.170) 0:01:08.539 ***** 2026-02-14 04:10:54.429503 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:10:54.429514 | orchestrator | 2026-02-14 04:10:54.429525 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-14 04:10:54.429536 | orchestrator | Saturday 14 February 2026 04:10:27 +0000 (0:00:12.130) 0:01:20.670 ***** 2026-02-14 04:10:54.429546 | orchestrator | 2026-02-14 04:10:54.429558 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-14 04:10:54.429569 | orchestrator | Saturday 14 February 2026 04:10:27 +0000 (0:00:00.087) 0:01:20.758 ***** 2026-02-14 04:10:54.429579 | orchestrator | 2026-02-14 04:10:54.429590 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-14 04:10:54.429601 | orchestrator | Saturday 14 February 2026 04:10:27 +0000 (0:00:00.082) 0:01:20.840 ***** 2026-02-14 
04:10:54.429612 | orchestrator | 2026-02-14 04:10:54.429623 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-02-14 04:10:54.429634 | orchestrator | Saturday 14 February 2026 04:10:27 +0000 (0:00:00.075) 0:01:20.915 ***** 2026-02-14 04:10:54.429645 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:10:54.429656 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:10:54.429667 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:10:54.429678 | orchestrator | 2026-02-14 04:10:54.429689 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-02-14 04:10:54.429700 | orchestrator | Saturday 14 February 2026 04:10:38 +0000 (0:00:11.086) 0:01:32.001 ***** 2026-02-14 04:10:54.429711 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:10:54.429725 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:10:54.429738 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:10:54.429750 | orchestrator | 2026-02-14 04:10:54.429763 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-02-14 04:10:54.429775 | orchestrator | Saturday 14 February 2026 04:10:48 +0000 (0:00:09.821) 0:01:41.823 ***** 2026-02-14 04:10:54.429788 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:10:54.429800 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:10:54.429812 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:10:54.429825 | orchestrator | 2026-02-14 04:10:54.429837 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 04:10:54.429851 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-14 04:10:54.429865 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-14 04:10:54.429878 | orchestrator | testbed-node-2 : ok=14  changed=10  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-14 04:10:54.429890 | orchestrator | 2026-02-14 04:10:54.429930 | orchestrator | 2026-02-14 04:10:54.429943 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 04:10:54.429956 | orchestrator | Saturday 14 February 2026 04:10:54 +0000 (0:00:05.367) 0:01:47.190 ***** 2026-02-14 04:10:54.429999 | orchestrator | =============================================================================== 2026-02-14 04:10:54.430075 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.49s 2026-02-14 04:10:54.430089 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.13s 2026-02-14 04:10:54.430102 | orchestrator | barbican : Restart barbican-api container ------------------------------ 11.09s 2026-02-14 04:10:54.430113 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 9.82s 2026-02-14 04:10:54.430124 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 6.68s 2026-02-14 04:10:54.430134 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.48s 2026-02-14 04:10:54.430145 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 5.37s 2026-02-14 04:10:54.430156 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.03s 2026-02-14 04:10:54.430167 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.82s 2026-02-14 04:10:54.430177 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.66s 2026-02-14 04:10:54.430193 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.52s 2026-02-14 04:10:54.430213 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.38s 
2026-02-14 04:10:54.430231 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.29s 2026-02-14 04:10:54.430252 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.33s 2026-02-14 04:10:54.430287 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.24s 2026-02-14 04:10:54.430320 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.17s 2026-02-14 04:10:54.430331 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.63s 2026-02-14 04:10:54.430342 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.53s 2026-02-14 04:10:54.430353 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.26s 2026-02-14 04:10:54.430368 | orchestrator | barbican : Copying over existing policy file ---------------------------- 0.95s 2026-02-14 04:10:56.727758 | orchestrator | 2026-02-14 04:10:56 | INFO  | Task e4c44335-7371-4aa1-bbfa-8c70d9a51211 (designate) was prepared for execution. 2026-02-14 04:10:56.727849 | orchestrator | 2026-02-14 04:10:56 | INFO  | It takes a moment until task e4c44335-7371-4aa1-bbfa-8c70d9a51211 (designate) has been started and output is visible here. 
2026-02-14 04:11:27.951169 | orchestrator | 2026-02-14 04:11:27.951284 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-14 04:11:27.951302 | orchestrator | 2026-02-14 04:11:27.951314 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-14 04:11:27.951326 | orchestrator | Saturday 14 February 2026 04:11:00 +0000 (0:00:00.250) 0:00:00.250 ***** 2026-02-14 04:11:27.951337 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:11:27.951349 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:11:27.951360 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:11:27.951371 | orchestrator | 2026-02-14 04:11:27.951383 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-14 04:11:27.951394 | orchestrator | Saturday 14 February 2026 04:11:01 +0000 (0:00:00.327) 0:00:00.578 ***** 2026-02-14 04:11:27.951405 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-02-14 04:11:27.951416 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-02-14 04:11:27.951427 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-02-14 04:11:27.951437 | orchestrator | 2026-02-14 04:11:27.951448 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-02-14 04:11:27.951484 | orchestrator | 2026-02-14 04:11:27.951496 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-14 04:11:27.951507 | orchestrator | Saturday 14 February 2026 04:11:01 +0000 (0:00:00.434) 0:00:01.012 ***** 2026-02-14 04:11:27.951518 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 04:11:27.951530 | orchestrator | 2026-02-14 04:11:27.951541 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 
2026-02-14 04:11:27.951552 | orchestrator | Saturday 14 February 2026 04:11:02 +0000 (0:00:00.554) 0:00:01.567 ***** 2026-02-14 04:11:27.951562 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-02-14 04:11:27.951573 | orchestrator | 2026-02-14 04:11:27.951583 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-02-14 04:11:27.951594 | orchestrator | Saturday 14 February 2026 04:11:05 +0000 (0:00:03.293) 0:00:04.861 ***** 2026-02-14 04:11:27.951605 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-02-14 04:11:27.951616 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-02-14 04:11:27.951626 | orchestrator | 2026-02-14 04:11:27.951637 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-02-14 04:11:27.951648 | orchestrator | Saturday 14 February 2026 04:11:11 +0000 (0:00:06.425) 0:00:11.287 ***** 2026-02-14 04:11:27.951659 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-14 04:11:27.951670 | orchestrator | 2026-02-14 04:11:27.951681 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-02-14 04:11:27.951692 | orchestrator | Saturday 14 February 2026 04:11:15 +0000 (0:00:03.181) 0:00:14.468 ***** 2026-02-14 04:11:27.951705 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-14 04:11:27.951717 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-02-14 04:11:27.951730 | orchestrator | 2026-02-14 04:11:27.951742 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-02-14 04:11:27.951754 | orchestrator | Saturday 14 February 2026 04:11:18 +0000 (0:00:03.896) 0:00:18.364 ***** 2026-02-14 04:11:27.951767 | orchestrator | ok: [testbed-node-0] => 
(item=admin) 2026-02-14 04:11:27.951779 | orchestrator | 2026-02-14 04:11:27.951792 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-02-14 04:11:27.951804 | orchestrator | Saturday 14 February 2026 04:11:22 +0000 (0:00:03.222) 0:00:21.587 ***** 2026-02-14 04:11:27.951817 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-02-14 04:11:27.951829 | orchestrator | 2026-02-14 04:11:27.951841 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-02-14 04:11:27.951854 | orchestrator | Saturday 14 February 2026 04:11:25 +0000 (0:00:03.750) 0:00:25.337 ***** 2026-02-14 04:11:27.951884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-14 04:11:27.951952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-14 04:11:27.951978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-14 04:11:27.951993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-14 04:11:27.952007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-14 04:11:27.952025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-14 04:11:27.952038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-14 04:11:27.952065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-14 04:11:33.978127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-14 04:11:33.978268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-14 04:11:33.978295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-14 04:11:33.978317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-14 04:11:33.978356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}}) 2026-02-14 04:11:33.978407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-14 04:11:33.978451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-14 04:11:33.978473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-14 
04:11:33.978494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-14 04:11:33.978517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-14 04:11:33.978538 | orchestrator | 2026-02-14 04:11:33.978561 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-02-14 04:11:33.978584 | orchestrator | Saturday 14 February 2026 04:11:28 +0000 (0:00:02.778) 0:00:28.115 ***** 2026-02-14 04:11:33.978604 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:11:33.978624 | orchestrator | 2026-02-14 04:11:33.978644 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-02-14 04:11:33.978664 | orchestrator | Saturday 14 February 2026 04:11:28 +0000 (0:00:00.143) 0:00:28.259 ***** 2026-02-14 04:11:33.978683 | orchestrator | skipping: [testbed-node-0] 2026-02-14 
04:11:33.978702 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:11:33.978723 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:11:33.978741 | orchestrator | 2026-02-14 04:11:33.978761 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-14 04:11:33.978792 | orchestrator | Saturday 14 February 2026 04:11:29 +0000 (0:00:00.518) 0:00:28.777 ***** 2026-02-14 04:11:33.978812 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 04:11:33.978832 | orchestrator | 2026-02-14 04:11:33.978851 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-02-14 04:11:33.978878 | orchestrator | Saturday 14 February 2026 04:11:29 +0000 (0:00:00.562) 0:00:29.340 ***** 2026-02-14 04:11:33.978931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-14 04:11:33.978970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-14 04:11:35.772359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-14 04:11:35.772477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-14 04:11:35.772505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-14 04:11:35.772572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-14 04:11:35.772587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-14 04:11:35.772618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-14 04:11:35.772630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-14 04:11:35.772641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-14 04:11:35.772654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-14 04:11:35.772678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-14 04:11:35.772690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-14 04:11:35.772702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-14 04:11:35.772722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-14 04:11:36.621164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-14 04:11:36.621269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-14 04:11:36.621311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-14 04:11:36.621325 | orchestrator | 2026-02-14 04:11:36.621339 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-02-14 04:11:36.621352 | orchestrator | Saturday 14 February 2026 04:11:35 +0000 (0:00:05.818) 0:00:35.159 ***** 2026-02-14 04:11:36.621381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-14 04:11:36.621395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-14 04:11:36.621425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-14 04:11:36.621438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-14 04:11:36.621450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-14 04:11:36.621471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}})  2026-02-14 04:11:36.621483 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:11:36.621501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-14 04:11:36.621514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-14 04:11:36.621526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-14 04:11:36.621545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-14 04:11:37.394093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-14 04:11:37.394236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-14 04:11:37.394255 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:11:37.394287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-14 04:11:37.394301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-14 04:11:37.394313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-14 04:11:37.394325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-14 04:11:37.394368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-14 
04:11:37.394381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-14 04:11:37.394394 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:11:37.394406 | orchestrator | 2026-02-14 04:11:37.394418 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-02-14 04:11:37.394430 | orchestrator | Saturday 14 February 2026 04:11:36 +0000 (0:00:00.971) 0:00:36.130 ***** 2026-02-14 04:11:37.394447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-14 04:11:37.394460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-14 04:11:37.394471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-14 04:11:37.394489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-14 04:11:37.716737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-14 04:11:37.716837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-14 04:11:37.716852 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:11:37.716884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-14 04:11:37.716940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-14 04:11:37.716954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-14 04:11:37.716966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-14 04:11:37.717017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-14 04:11:37.717030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-14 04:11:37.717042 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:11:37.717059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-14 04:11:37.717071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-14 04:11:37.717083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-14 04:11:37.717101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-14 04:11:37.717120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-14 04:11:41.978299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-14 04:11:41.978414 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:11:41.978432 | orchestrator | 2026-02-14 04:11:41.978445 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-02-14 
04:11:41.978458 | orchestrator | Saturday 14 February 2026 04:11:37 +0000 (0:00:00.972) 0:00:37.103 ***** 2026-02-14 04:11:41.978487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-14 04:11:41.978501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-14 04:11:41.978513 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-14 04:11:41.978565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-14 04:11:41.978581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-14 04:11:41.978618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-14 04:11:41.978642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-14 04:11:41.978654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-14 04:11:41.978673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-14 04:11:41.978685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-14 04:11:41.978707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-14 04:11:53.315036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-14 04:11:53.315183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-14 04:11:53.315203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-14 04:11:53.315216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-14 04:11:53.315248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-14 04:11:53.315261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-14 04:11:53.315291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-14 04:11:53.315304 | orchestrator | 2026-02-14 04:11:53.315318 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-02-14 04:11:53.315331 | orchestrator | Saturday 14 February 2026 04:11:43 +0000 (0:00:06.065) 0:00:43.168 ***** 2026-02-14 04:11:53.315350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-14 04:11:53.315364 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-14 04:11:53.315383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-14 04:11:53.315396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-14 04:11:53.315417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-14 04:12:01.368740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-14 04:12:01.368909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-14 04:12:01.368931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-14 04:12:01.368966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-14 04:12:01.368979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-14 04:12:01.368992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-14 04:12:01.369023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-14 04:12:01.369042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-14 04:12:01.369055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-14 04:12:01.369074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-14 04:12:01.369086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-14 04:12:01.369097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-14 04:12:01.369109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-14 04:12:01.369120 | orchestrator | 2026-02-14 04:12:01.369134 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-02-14 04:12:01.369147 | orchestrator | Saturday 14 February 2026 04:11:57 +0000 (0:00:14.048) 0:00:57.216 ***** 2026-02-14 04:12:01.369166 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-14 04:12:05.564358 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-14 04:12:05.564490 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-14 04:12:05.564516 | orchestrator | 2026-02-14 04:12:05.564537 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-02-14 04:12:05.564556 | orchestrator | Saturday 14 February 2026 04:12:01 +0000 (0:00:03.537) 0:01:00.754 ***** 2026-02-14 04:12:05.564575 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-14 04:12:05.564605 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-14 04:12:05.564616 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-14 04:12:05.564650 | orchestrator | 2026-02-14 04:12:05.564661 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-02-14 04:12:05.564672 | orchestrator | Saturday 14 February 2026 04:12:03 +0000 (0:00:02.425) 0:01:03.179 ***** 2026-02-14 04:12:05.564687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-14 04:12:05.564702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-14 04:12:05.564715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2026-02-14 04:12:05.564745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-14 04:12:05.564764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-14 04:12:05.564784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2026-02-14 04:12:05.564796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-14 04:12:05.564808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-14 04:12:05.564820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-02-14 04:12:05.564831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-14 04:12:05.564851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-14 04:12:08.391495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2026-02-14 04:12:08.391583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-14 04:12:08.391597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-14 04:12:08.391611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-14 04:12:08.391621 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-14 04:12:08.391629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-14 04:12:08.391653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-14 04:12:08.391684 | orchestrator | 2026-02-14 04:12:08.391696 | orchestrator | TASK [designate : Copying over rndc.key] 
*************************************** 2026-02-14 04:12:08.391708 | orchestrator | Saturday 14 February 2026 04:12:06 +0000 (0:00:02.843) 0:01:06.023 ***** 2026-02-14 04:12:08.391719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-14 04:12:08.391731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-14 
04:12:08.391743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-14 04:12:08.391753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-14 04:12:08.391774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-14 04:12:09.373909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-14 04:12:09.374011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-14 04:12:09.374088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-14 04:12:09.374104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-14 04:12:09.374117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-14 04:12:09.374129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-14 04:12:09.374200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-14 04:12:09.374214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-14 04:12:09.374227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-14 04:12:09.374239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-14 04:12:09.374250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-14 04:12:09.374262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-14 04:12:09.374281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-14 04:12:09.374293 | orchestrator |
2026-02-14 04:12:09.374307 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-14 04:12:09.374332 | orchestrator | Saturday 14 February 2026 04:12:09 +0000 (0:00:02.737) 0:01:08.760 *****
2026-02-14 04:12:10.298271 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:12:10.298386 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:12:10.298401 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:12:10.298413 | orchestrator |
2026-02-14 04:12:10.298425 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-02-14 04:12:10.298438 | orchestrator | Saturday 14 February 2026 04:12:09 +0000 (0:00:00.308) 0:01:09.068 *****
2026-02-14 04:12:10.298453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-14 04:12:10.298469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-14 04:12:10.298483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-14 04:12:10.298496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-14 04:12:10.298533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-14 04:12:10.298579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-14 04:12:10.298592 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:12:10.298604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-14 04:12:10.298616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-14 04:12:10.298628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-14 04:12:10.298639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-14 04:12:10.298658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-14 04:12:10.298682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-14 04:12:13.568884 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:12:13.569003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-14 04:12:13.569024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-14 04:12:13.569038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-14 04:12:13.569078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-14 04:12:13.569106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-14 04:12:13.569145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130',
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-14 04:12:13.569158 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:12:13.569170 | orchestrator |
2026-02-14 04:12:13.569201 | orchestrator | TASK [designate : Check designate containers] **********************************
2026-02-14 04:12:13.569214 | orchestrator | Saturday 14 February 2026 04:12:10 +0000 (0:00:00.736) 0:01:09.805 *****
2026-02-14 04:12:13.569226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-14 04:12:13.569239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-14 04:12:13.569251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-14 04:12:13.569271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-14 04:12:13.569378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-14 04:12:15.326327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-14 04:12:15.326453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-14 04:12:15.326474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-14 04:12:15.326547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-14 04:12:15.326579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-14 04:12:15.326601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-14 04:12:15.326664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-14 04:12:15.326688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-14 04:12:15.326709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-14 04:12:15.326723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-14 04:12:15.326744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-14 04:12:15.326755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-14 04:12:15.326766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-14 04:12:15.326778 | orchestrator |
2026-02-14 04:12:15.326791 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-14 04:12:15.326813 | orchestrator | Saturday 14 February 2026 04:12:14 +0000 (0:00:04.392) 0:01:14.198 *****
2026-02-14 04:12:15.326829 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:12:15.326886 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:13:43.986317 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:13:43.986447 | orchestrator |
2026-02-14 04:13:43.986467 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2026-02-14 04:13:43.986481 | orchestrator | Saturday 14 February 2026 04:12:15 +0000 (0:00:00.519) 0:01:14.717 *****
2026-02-14 04:13:43.986492 | orchestrator | changed: [testbed-node-0] => (item=designate)
2026-02-14 04:13:43.986504 | orchestrator |
2026-02-14 04:13:43.986515 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2026-02-14 04:13:43.986526 | orchestrator | Saturday 14 February 2026 04:12:17 +0000 (0:00:02.117) 0:01:16.835 *****
2026-02-14 04:13:43.986537 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-14 04:13:43.986549 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2026-02-14 04:13:43.986560 | orchestrator |
2026-02-14 04:13:43.986570 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-02-14 04:13:43.986581 | orchestrator | Saturday 14 February 2026 04:12:19 +0000 (0:00:02.232) 0:01:19.067 *****
2026-02-14 04:13:43.986593 | orchestrator | changed: [testbed-node-0]
2026-02-14 04:13:43.986604 | orchestrator |
2026-02-14 04:13:43.986620 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-02-14 04:13:43.986680 | orchestrator | Saturday 14 February 2026 04:12:35 +0000 (0:00:15.917) 0:01:34.984 *****
2026-02-14 04:13:43.986700 | orchestrator |
2026-02-14 04:13:43.986744 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-02-14 04:13:43.986763 | orchestrator | Saturday 14 February 2026 04:12:35 +0000 (0:00:00.068) 0:01:35.053 *****
2026-02-14 04:13:43.986783 | orchestrator |
2026-02-14 04:13:43.986799 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-02-14 04:13:43.986811 | orchestrator | Saturday 14 February 2026 04:12:35 +0000 (0:00:00.071) 0:01:35.125 *****
2026-02-14 04:13:43.986822 | orchestrator |
2026-02-14 04:13:43.986834 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2026-02-14 04:13:43.986845 | orchestrator | Saturday 14 February 2026 04:12:35 +0000 (0:00:00.071) 0:01:35.196 *****
2026-02-14 04:13:43.986856 | orchestrator | changed: [testbed-node-0]
2026-02-14 04:13:43.986867 | orchestrator | changed: [testbed-node-2]
2026-02-14 04:13:43.986878 | orchestrator | changed: [testbed-node-1]
2026-02-14 04:13:43.986888 | orchestrator |
2026-02-14 04:13:43.986899 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2026-02-14 04:13:43.986910 | orchestrator | Saturday 14 February 2026 04:12:48 +0000 (0:00:12.715) 0:01:47.911 *****
2026-02-14 04:13:43.986921 | orchestrator | changed: [testbed-node-0]
2026-02-14 04:13:43.986932 | orchestrator | changed: [testbed-node-1]
2026-02-14 04:13:43.986942 | orchestrator | changed: [testbed-node-2]
2026-02-14 04:13:43.986953 | orchestrator |
2026-02-14 04:13:43.986965 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2026-02-14 04:13:43.986975 | orchestrator | Saturday 14 February 2026 04:12:59 +0000 (0:00:10.654) 0:01:58.566 *****
2026-02-14 04:13:43.986987 | orchestrator | changed: [testbed-node-0]
2026-02-14 04:13:43.987006 | orchestrator | changed: [testbed-node-2]
2026-02-14 04:13:43.987024 | orchestrator | changed: [testbed-node-1]
2026-02-14 04:13:43.987042 | orchestrator |
2026-02-14 04:13:43.987060 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2026-02-14 04:13:43.987077 | orchestrator | Saturday 14 February 2026 04:13:04 +0000 (0:00:05.534) 0:02:04.101 *****
2026-02-14 04:13:43.987094 | orchestrator | changed: [testbed-node-0]
2026-02-14 04:13:43.987112 | orchestrator | changed: [testbed-node-1]
2026-02-14 04:13:43.987130 | orchestrator | changed: [testbed-node-2]
2026-02-14 04:13:43.987149 | orchestrator |
2026-02-14 04:13:43.987168
| orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-02-14 04:13:43.987187 | orchestrator | Saturday 14 February 2026 04:13:15 +0000 (0:00:10.675) 0:02:14.777 ***** 2026-02-14 04:13:43.987206 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:13:43.987225 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:13:43.987244 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:13:43.987263 | orchestrator | 2026-02-14 04:13:43.987281 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-02-14 04:13:43.987299 | orchestrator | Saturday 14 February 2026 04:13:25 +0000 (0:00:10.513) 0:02:25.290 ***** 2026-02-14 04:13:43.987318 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:13:43.987335 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:13:43.987352 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:13:43.987369 | orchestrator | 2026-02-14 04:13:43.987387 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-02-14 04:13:43.987406 | orchestrator | Saturday 14 February 2026 04:13:36 +0000 (0:00:10.591) 0:02:35.882 ***** 2026-02-14 04:13:43.987425 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:13:43.987444 | orchestrator | 2026-02-14 04:13:43.987463 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 04:13:43.987482 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-14 04:13:43.987504 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-14 04:13:43.987541 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-14 04:13:43.987560 | orchestrator | 2026-02-14 04:13:43.987579 | orchestrator | 2026-02-14 04:13:43.987596 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-14 04:13:43.987614 | orchestrator | Saturday 14 February 2026 04:13:43 +0000 (0:00:07.138) 0:02:43.020 ***** 2026-02-14 04:13:43.987632 | orchestrator | =============================================================================== 2026-02-14 04:13:43.987670 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.92s 2026-02-14 04:13:43.987688 | orchestrator | designate : Copying over designate.conf -------------------------------- 14.05s 2026-02-14 04:13:43.987783 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 12.72s 2026-02-14 04:13:43.987808 | orchestrator | designate : Restart designate-producer container ----------------------- 10.68s 2026-02-14 04:13:43.987826 | orchestrator | designate : Restart designate-api container ---------------------------- 10.65s 2026-02-14 04:13:43.987844 | orchestrator | designate : Restart designate-worker container ------------------------- 10.59s 2026-02-14 04:13:43.987862 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.51s 2026-02-14 04:13:43.987880 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.14s 2026-02-14 04:13:43.987898 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.43s 2026-02-14 04:13:43.987918 | orchestrator | designate : Copying over config.json files for services ----------------- 6.07s 2026-02-14 04:13:43.987937 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.82s 2026-02-14 04:13:43.987955 | orchestrator | designate : Restart designate-central container ------------------------- 5.53s 2026-02-14 04:13:43.987973 | orchestrator | designate : Check designate containers ---------------------------------- 4.39s 2026-02-14 04:13:43.987991 | orchestrator | service-ks-register : designate | 
Creating users ------------------------ 3.90s 2026-02-14 04:13:43.988010 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.75s 2026-02-14 04:13:43.988026 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 3.54s 2026-02-14 04:13:43.988043 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.29s 2026-02-14 04:13:43.988060 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.22s 2026-02-14 04:13:43.988079 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.18s 2026-02-14 04:13:43.988098 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 2.84s 2026-02-14 04:13:46.348190 | orchestrator | 2026-02-14 04:13:46 | INFO  | Task 32ea02cc-8dc1-4228-a1ad-3c8b2c9056bd (octavia) was prepared for execution. 2026-02-14 04:13:46.348291 | orchestrator | 2026-02-14 04:13:46 | INFO  | It takes a moment until task 32ea02cc-8dc1-4228-a1ad-3c8b2c9056bd (octavia) has been started and output is visible here. 
2026-02-14 04:15:52.464422 | orchestrator | 2026-02-14 04:15:52.464546 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-14 04:15:52.464564 | orchestrator | 2026-02-14 04:15:52.464680 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-14 04:15:52.464698 | orchestrator | Saturday 14 February 2026 04:13:50 +0000 (0:00:00.250) 0:00:00.250 ***** 2026-02-14 04:15:52.464710 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:15:52.464723 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:15:52.464735 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:15:52.464747 | orchestrator | 2026-02-14 04:15:52.464759 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-14 04:15:52.464771 | orchestrator | Saturday 14 February 2026 04:13:50 +0000 (0:00:00.317) 0:00:00.567 ***** 2026-02-14 04:15:52.464783 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-02-14 04:15:52.464822 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-02-14 04:15:52.464834 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-02-14 04:15:52.464846 | orchestrator | 2026-02-14 04:15:52.464857 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-02-14 04:15:52.464869 | orchestrator | 2026-02-14 04:15:52.464880 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-14 04:15:52.464892 | orchestrator | Saturday 14 February 2026 04:13:51 +0000 (0:00:00.437) 0:00:01.005 ***** 2026-02-14 04:15:52.464904 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 04:15:52.464916 | orchestrator | 2026-02-14 04:15:52.464928 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 
2026-02-14 04:15:52.464942 | orchestrator | Saturday 14 February 2026 04:13:51 +0000 (0:00:00.554) 0:00:01.559 ***** 2026-02-14 04:15:52.464955 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-02-14 04:15:52.464968 | orchestrator | 2026-02-14 04:15:52.464980 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-02-14 04:15:52.464993 | orchestrator | Saturday 14 February 2026 04:13:55 +0000 (0:00:03.371) 0:00:04.930 ***** 2026-02-14 04:15:52.465006 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-02-14 04:15:52.465019 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-02-14 04:15:52.465032 | orchestrator | 2026-02-14 04:15:52.465045 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-02-14 04:15:52.465057 | orchestrator | Saturday 14 February 2026 04:14:01 +0000 (0:00:06.673) 0:00:11.603 ***** 2026-02-14 04:15:52.465070 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-14 04:15:52.465083 | orchestrator | 2026-02-14 04:15:52.465095 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-02-14 04:15:52.465108 | orchestrator | Saturday 14 February 2026 04:14:05 +0000 (0:00:03.210) 0:00:14.813 ***** 2026-02-14 04:15:52.465121 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-14 04:15:52.465133 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-02-14 04:15:52.465146 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-02-14 04:15:52.465159 | orchestrator | 2026-02-14 04:15:52.465185 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-02-14 04:15:52.465197 | orchestrator | Saturday 14 February 2026 04:14:13 +0000 
(0:00:08.270) 0:00:23.084 ***** 2026-02-14 04:15:52.465208 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-14 04:15:52.465219 | orchestrator | 2026-02-14 04:15:52.465231 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-02-14 04:15:52.465242 | orchestrator | Saturday 14 February 2026 04:14:16 +0000 (0:00:03.223) 0:00:26.308 ***** 2026-02-14 04:15:52.465253 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-02-14 04:15:52.465264 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-02-14 04:15:52.465275 | orchestrator | 2026-02-14 04:15:52.465286 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-02-14 04:15:52.465298 | orchestrator | Saturday 14 February 2026 04:14:23 +0000 (0:00:07.364) 0:00:33.672 ***** 2026-02-14 04:15:52.465309 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-02-14 04:15:52.465320 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-02-14 04:15:52.465331 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-02-14 04:15:52.465343 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-02-14 04:15:52.465354 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-02-14 04:15:52.465365 | orchestrator | 2026-02-14 04:15:52.465376 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-14 04:15:52.465397 | orchestrator | Saturday 14 February 2026 04:14:39 +0000 (0:00:15.880) 0:00:49.552 ***** 2026-02-14 04:15:52.465408 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 04:15:52.465420 | orchestrator | 2026-02-14 04:15:52.465431 | orchestrator | TASK [octavia : Create amphora flavor] 
***************************************** 2026-02-14 04:15:52.465442 | orchestrator | Saturday 14 February 2026 04:14:40 +0000 (0:00:00.785) 0:00:50.337 ***** 2026-02-14 04:15:52.465453 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:15:52.465465 | orchestrator | 2026-02-14 04:15:52.465476 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-02-14 04:15:52.465488 | orchestrator | Saturday 14 February 2026 04:14:45 +0000 (0:00:05.066) 0:00:55.404 ***** 2026-02-14 04:15:52.465499 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:15:52.465511 | orchestrator | 2026-02-14 04:15:52.465522 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-02-14 04:15:52.465555 | orchestrator | Saturday 14 February 2026 04:14:49 +0000 (0:00:03.946) 0:00:59.350 ***** 2026-02-14 04:15:52.465567 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:15:52.465607 | orchestrator | 2026-02-14 04:15:52.465627 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-02-14 04:15:52.465646 | orchestrator | Saturday 14 February 2026 04:14:52 +0000 (0:00:03.258) 0:01:02.609 ***** 2026-02-14 04:15:52.465664 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-02-14 04:15:52.465676 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-02-14 04:15:52.465686 | orchestrator | 2026-02-14 04:15:52.465697 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-02-14 04:15:52.465708 | orchestrator | Saturday 14 February 2026 04:15:02 +0000 (0:00:09.798) 0:01:12.408 ***** 2026-02-14 04:15:52.465719 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-02-14 04:15:52.465730 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 
'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-02-14 04:15:52.465742 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-02-14 04:15:52.465755 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-02-14 04:15:52.465770 | orchestrator | 2026-02-14 04:15:52.465781 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-02-14 04:15:52.465793 | orchestrator | Saturday 14 February 2026 04:15:19 +0000 (0:00:17.152) 0:01:29.561 ***** 2026-02-14 04:15:52.465803 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:15:52.465814 | orchestrator | 2026-02-14 04:15:52.465825 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-02-14 04:15:52.465836 | orchestrator | Saturday 14 February 2026 04:15:24 +0000 (0:00:04.501) 0:01:34.062 ***** 2026-02-14 04:15:52.465846 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:15:52.465857 | orchestrator | 2026-02-14 04:15:52.465868 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-02-14 04:15:52.465878 | orchestrator | Saturday 14 February 2026 04:15:29 +0000 (0:00:05.531) 0:01:39.594 ***** 2026-02-14 04:15:52.465889 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:15:52.465900 | orchestrator | 2026-02-14 04:15:52.465911 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-02-14 04:15:52.465922 | orchestrator | Saturday 14 February 2026 04:15:30 +0000 (0:00:00.218) 0:01:39.813 ***** 2026-02-14 04:15:52.465933 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:15:52.465944 | orchestrator | 2026-02-14 04:15:52.465955 | orchestrator | TASK [octavia : include_tasks] 
************************************************* 2026-02-14 04:15:52.465965 | orchestrator | Saturday 14 February 2026 04:15:34 +0000 (0:00:04.221) 0:01:44.034 ***** 2026-02-14 04:15:52.465984 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 04:15:52.465995 | orchestrator | 2026-02-14 04:15:52.466006 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-02-14 04:15:52.466091 | orchestrator | Saturday 14 February 2026 04:15:35 +0000 (0:00:01.115) 0:01:45.149 ***** 2026-02-14 04:15:52.466107 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:15:52.466118 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:15:52.466129 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:15:52.466140 | orchestrator | 2026-02-14 04:15:52.466151 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-02-14 04:15:52.466161 | orchestrator | Saturday 14 February 2026 04:15:40 +0000 (0:00:05.153) 0:01:50.302 ***** 2026-02-14 04:15:52.466172 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:15:52.466183 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:15:52.466194 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:15:52.466205 | orchestrator | 2026-02-14 04:15:52.466216 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-02-14 04:15:52.466227 | orchestrator | Saturday 14 February 2026 04:15:45 +0000 (0:00:04.667) 0:01:54.970 ***** 2026-02-14 04:15:52.466238 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:15:52.466248 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:15:52.466259 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:15:52.466270 | orchestrator | 2026-02-14 04:15:52.466281 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-02-14 
04:15:52.466292 | orchestrator | Saturday 14 February 2026 04:15:46 +0000 (0:00:01.038) 0:01:56.008 ***** 2026-02-14 04:15:52.466302 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:15:52.466313 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:15:52.466324 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:15:52.466335 | orchestrator | 2026-02-14 04:15:52.466346 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-02-14 04:15:52.466356 | orchestrator | Saturday 14 February 2026 04:15:47 +0000 (0:00:01.725) 0:01:57.734 ***** 2026-02-14 04:15:52.466367 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:15:52.466378 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:15:52.466389 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:15:52.466400 | orchestrator | 2026-02-14 04:15:52.466411 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-02-14 04:15:52.466421 | orchestrator | Saturday 14 February 2026 04:15:49 +0000 (0:00:01.191) 0:01:58.925 ***** 2026-02-14 04:15:52.466432 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:15:52.466443 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:15:52.466454 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:15:52.466465 | orchestrator | 2026-02-14 04:15:52.466476 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-02-14 04:15:52.466487 | orchestrator | Saturday 14 February 2026 04:15:50 +0000 (0:00:01.151) 0:02:00.077 ***** 2026-02-14 04:15:52.466497 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:15:52.466508 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:15:52.466519 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:15:52.466529 | orchestrator | 2026-02-14 04:15:52.466550 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-02-14 04:16:18.759008 | orchestrator 
| Saturday 14 February 2026 04:15:52 +0000 (0:00:02.183) 0:02:02.261 ***** 2026-02-14 04:16:18.759124 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:16:18.759139 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:16:18.759149 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:16:18.759158 | orchestrator | 2026-02-14 04:16:18.759168 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-02-14 04:16:18.759178 | orchestrator | Saturday 14 February 2026 04:15:53 +0000 (0:00:01.414) 0:02:03.676 ***** 2026-02-14 04:16:18.759187 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:16:18.759198 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:16:18.759229 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:16:18.759238 | orchestrator | 2026-02-14 04:16:18.759247 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-02-14 04:16:18.759256 | orchestrator | Saturday 14 February 2026 04:15:54 +0000 (0:00:00.674) 0:02:04.350 ***** 2026-02-14 04:16:18.759265 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:16:18.759273 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:16:18.759282 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:16:18.759290 | orchestrator | 2026-02-14 04:16:18.759299 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-14 04:16:18.759308 | orchestrator | Saturday 14 February 2026 04:15:58 +0000 (0:00:03.916) 0:02:08.267 ***** 2026-02-14 04:16:18.759317 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 04:16:18.759326 | orchestrator | 2026-02-14 04:16:18.759335 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-02-14 04:16:18.759344 | orchestrator | Saturday 14 February 2026 04:15:58 +0000 (0:00:00.539) 0:02:08.806 ***** 2026-02-14 
04:16:18.759352 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:16:18.759361 | orchestrator | 2026-02-14 04:16:18.759370 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-02-14 04:16:18.759378 | orchestrator | Saturday 14 February 2026 04:16:02 +0000 (0:00:03.884) 0:02:12.691 ***** 2026-02-14 04:16:18.759387 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:16:18.759395 | orchestrator | 2026-02-14 04:16:18.759404 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-02-14 04:16:18.759413 | orchestrator | Saturday 14 February 2026 04:16:06 +0000 (0:00:03.153) 0:02:15.844 ***** 2026-02-14 04:16:18.759422 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-02-14 04:16:18.759431 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-02-14 04:16:18.759440 | orchestrator | 2026-02-14 04:16:18.759449 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-02-14 04:16:18.759457 | orchestrator | Saturday 14 February 2026 04:16:12 +0000 (0:00:06.727) 0:02:22.571 ***** 2026-02-14 04:16:18.759466 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:16:18.759474 | orchestrator | 2026-02-14 04:16:18.759483 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-02-14 04:16:18.759492 | orchestrator | Saturday 14 February 2026 04:16:16 +0000 (0:00:03.558) 0:02:26.130 ***** 2026-02-14 04:16:18.759500 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:16:18.759509 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:16:18.759517 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:16:18.759526 | orchestrator | 2026-02-14 04:16:18.759573 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-02-14 04:16:18.759586 | orchestrator | Saturday 14 February 2026 04:16:16 +0000 (0:00:00.475) 0:02:26.606 ***** 
2026-02-14 04:16:18.759600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-14 04:16:18.759631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-14 04:16:18.759651 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-14 04:16:18.759663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-14 04:16:18.759674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-14 04:16:18.759689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-14 04:16:18.759701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-14 04:16:18.759719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-14 04:16:18.759737 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-14 04:16:20.180061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-14 04:16:20.180165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-14 04:16:20.180198 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-14 04:16:20.180211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-14 04:16:20.180225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-14 04:16:20.180258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-14 04:16:20.180271 | orchestrator | 2026-02-14 04:16:20.180285 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-02-14 04:16:20.180298 | orchestrator | Saturday 14 February 2026 04:16:19 +0000 (0:00:02.390) 0:02:28.996 ***** 2026-02-14 04:16:20.180310 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:16:20.180323 | orchestrator | 2026-02-14 04:16:20.180335 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-02-14 04:16:20.180347 | orchestrator | Saturday 14 February 2026 04:16:19 +0000 (0:00:00.140) 0:02:29.137 ***** 2026-02-14 04:16:20.180358 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:16:20.180385 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:16:20.180397 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:16:20.180409 | orchestrator | 2026-02-14 04:16:20.180421 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-02-14 04:16:20.180433 | orchestrator | Saturday 14 February 2026 04:16:19 +0000 (0:00:00.306) 0:02:29.443 ***** 2026-02-14 04:16:20.180446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-14 04:16:20.180461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-14 04:16:20.180480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-14 04:16:20.180500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-14 04:16:20.180512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-14 04:16:20.180524 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:16:20.180544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-14 04:16:25.004950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-14 04:16:25.005049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-14 04:16:25.005080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-14 04:16:25.005111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-14 04:16:25.005121 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:16:25.005133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-14 04:16:25.005144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-14 04:16:25.005169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-14 04:16:25.005179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-14 04:16:25.005193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-14 04:16:25.005211 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:16:25.005220 | orchestrator | 2026-02-14 04:16:25.005230 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-14 04:16:25.005241 | orchestrator | Saturday 14 February 2026 04:16:20 +0000 (0:00:00.633) 0:02:30.076 ***** 2026-02-14 04:16:25.005250 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 04:16:25.005259 | orchestrator | 2026-02-14 04:16:25.005268 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-02-14 04:16:25.005277 | orchestrator | Saturday 14 February 2026 04:16:20 +0000 (0:00:00.714) 0:02:30.791 ***** 2026-02-14 04:16:25.005287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-14 04:16:25.005297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-14 04:16:25.005339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-14 04:16:26.501495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-14 04:16:26.501766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-14 04:16:26.501788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-14 04:16:26.501807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-14 04:16:26.501829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-14 04:16:26.501849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-14 04:16:26.501894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-14 04:16:26.501940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-14 04:16:26.501963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-14 04:16:26.501984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-14 04:16:26.502004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-14 04:16:26.502091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-14 04:16:26.502106 | orchestrator | 2026-02-14 04:16:26.502120 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-02-14 04:16:26.502135 | orchestrator | Saturday 14 February 2026 04:16:25 +0000 (0:00:04.940) 0:02:35.731 ***** 2026-02-14 04:16:26.502161 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-14 04:16:26.608248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-14 04:16:26.608350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-14 04:16:26.608367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-14 04:16:26.608380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-14 04:16:26.608393 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:16:26.608407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-14 04:16:26.608420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-14 04:16:26.608478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-14 04:16:26.608492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-14 04:16:26.608504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-14 04:16:26.608516 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:16:26.608527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-14 04:16:26.608564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-14 04:16:26.608577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-14 04:16:26.608605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}})  2026-02-14 04:16:27.398303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-14 04:16:27.398380 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:16:27.398388 | orchestrator | 2026-02-14 04:16:27.398396 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-02-14 04:16:27.398404 | orchestrator | Saturday 14 February 2026 04:16:26 +0000 (0:00:00.681) 0:02:36.413 ***** 2026-02-14 04:16:27.398412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  
2026-02-14 04:16:27.398422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-14 04:16:27.398429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-14 04:16:27.398452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-14 04:16:27.398477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-14 04:16:27.398487 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:16:27.398494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-14 04:16:27.398502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-14 04:16:27.398509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-14 04:16:27.398516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-14 04:16:27.398528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-14 04:16:27.398534 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:16:27.398565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-14 04:16:31.958224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-14 04:16:31.958324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-14 04:16:31.958341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-14 04:16:31.958358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-14 04:16:31.958396 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:16:31.958411 | orchestrator | 2026-02-14 04:16:31.958425 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-02-14 
04:16:31.958440 | orchestrator | Saturday 14 February 2026 04:16:27 +0000 (0:00:01.261) 0:02:37.675 ***** 2026-02-14 04:16:31.958454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-14 04:16:31.958507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-14 04:16:31.958523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-14 04:16:31.958557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-14 04:16:31.958582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-14 04:16:31.958596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-14 04:16:31.958611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-14 04:16:31.958639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-14 04:16:47.413671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-14 04:16:47.413802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-14 04:16:47.413812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-14 04:16:47.413844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-14 04:16:47.413851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-14 04:16:47.413872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
2026-02-14 04:16:47.413896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-14 04:16:47.413902 | orchestrator | 2026-02-14 04:16:47.413909 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-02-14 04:16:47.413917 | orchestrator | Saturday 14 February 2026 04:16:32 +0000 (0:00:05.120) 0:02:42.795 ***** 2026-02-14 04:16:47.413923 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-14 04:16:47.413930 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-14 04:16:47.413935 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-14 04:16:47.413940 | orchestrator | 2026-02-14 04:16:47.413945 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-02-14 04:16:47.413951 | orchestrator | Saturday 14 February 2026 04:16:34 +0000 (0:00:01.560) 0:02:44.355 ***** 2026-02-14 04:16:47.413957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-14 04:16:47.413970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-14 04:16:47.413975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-14 04:16:47.413991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-14 04:17:02.623138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-14 04:17:02.623283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-14 04:17:02.623324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-14 04:17:02.623337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-14 04:17:02.623348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-14 04:17:02.623359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-14 04:17:02.623403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-14 04:17:02.623415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-14 04:17:02.623434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-14 04:17:02.623445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-14 04:17:02.623455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-14 04:17:02.623466 | orchestrator | 2026-02-14 04:17:02.623478 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-02-14 04:17:02.623491 | orchestrator | Saturday 14 February 2026 04:16:50 +0000 (0:00:16.047) 0:03:00.403 ***** 2026-02-14 04:17:02.623501 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:17:02.623538 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:17:02.623548 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:17:02.623558 | orchestrator | 2026-02-14 04:17:02.623568 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-02-14 04:17:02.623577 | orchestrator | Saturday 14 February 2026 04:16:52 +0000 (0:00:01.695) 0:03:02.099 ***** 2026-02-14 04:17:02.623588 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-14 04:17:02.623598 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-14 04:17:02.623607 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-14 04:17:02.623617 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-14 04:17:02.623627 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-14 04:17:02.623637 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-14 04:17:02.623649 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-14 04:17:02.623661 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-14 04:17:02.623672 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-14 04:17:02.623689 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-14 04:17:02.623701 | orchestrator 
| changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-14 04:17:02.623712 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-14 04:17:02.623733 | orchestrator | 2026-02-14 04:17:02.623744 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-02-14 04:17:02.623762 | orchestrator | Saturday 14 February 2026 04:16:57 +0000 (0:00:05.045) 0:03:07.144 ***** 2026-02-14 04:17:02.623773 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-14 04:17:02.623786 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-14 04:17:02.623804 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-14 04:17:10.919403 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-14 04:17:10.919563 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-14 04:17:10.919580 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-14 04:17:10.919591 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-14 04:17:10.919603 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-14 04:17:10.919615 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-14 04:17:10.919627 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-14 04:17:10.919639 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-14 04:17:10.919651 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-14 04:17:10.919663 | orchestrator | 2026-02-14 04:17:10.919676 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-02-14 04:17:10.919689 | orchestrator | Saturday 14 February 2026 04:17:02 +0000 (0:00:05.274) 0:03:12.418 ***** 2026-02-14 04:17:10.919701 | orchestrator | changed: [testbed-node-0] => 
(item=client.cert-and-key.pem) 2026-02-14 04:17:10.919712 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-14 04:17:10.919724 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-14 04:17:10.919735 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-14 04:17:10.919747 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-14 04:17:10.919759 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-14 04:17:10.919770 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-14 04:17:10.919782 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-14 04:17:10.919793 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-14 04:17:10.919805 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-14 04:17:10.919816 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-14 04:17:10.919828 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-14 04:17:10.919839 | orchestrator | 2026-02-14 04:17:10.919851 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-02-14 04:17:10.919863 | orchestrator | Saturday 14 February 2026 04:17:07 +0000 (0:00:05.134) 0:03:17.552 ***** 2026-02-14 04:17:10.919879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-14 04:17:10.919913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-14 04:17:10.919977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-14 04:17:10.919993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-14 04:17:10.920008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-14 04:17:10.920023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2026-02-14 04:17:10.920037 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-14 04:17:10.920051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-14 04:17:10.920083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-14 04:17:10.920105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-14 04:18:43.356125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-14 04:18:43.356266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-14 04:18:43.356301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-14 04:18:43.356323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-14 04:18:43.356374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-14 04:18:43.356397 | orchestrator | 2026-02-14 
04:18:43.356477 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-14 04:18:43.356494 | orchestrator | Saturday 14 February 2026 04:17:11 +0000 (0:00:03.828) 0:03:21.381 ***** 2026-02-14 04:18:43.356506 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:18:43.356518 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:18:43.356529 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:18:43.356539 | orchestrator | 2026-02-14 04:18:43.356551 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-02-14 04:18:43.356562 | orchestrator | Saturday 14 February 2026 04:17:12 +0000 (0:00:00.514) 0:03:21.896 ***** 2026-02-14 04:18:43.356572 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:18:43.356583 | orchestrator | 2026-02-14 04:18:43.356594 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-02-14 04:18:43.356605 | orchestrator | Saturday 14 February 2026 04:17:14 +0000 (0:00:02.053) 0:03:23.950 ***** 2026-02-14 04:18:43.356615 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:18:43.356626 | orchestrator | 2026-02-14 04:18:43.356637 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-02-14 04:18:43.356647 | orchestrator | Saturday 14 February 2026 04:17:16 +0000 (0:00:02.073) 0:03:26.023 ***** 2026-02-14 04:18:43.356660 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:18:43.356673 | orchestrator | 2026-02-14 04:18:43.356686 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-02-14 04:18:43.356699 | orchestrator | Saturday 14 February 2026 04:17:18 +0000 (0:00:02.204) 0:03:28.228 ***** 2026-02-14 04:18:43.356730 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:18:43.356743 | orchestrator | 2026-02-14 04:18:43.356755 | orchestrator | TASK [octavia : Running Octavia 
bootstrap container] *************************** 2026-02-14 04:18:43.356767 | orchestrator | Saturday 14 February 2026 04:17:20 +0000 (0:00:02.373) 0:03:30.602 ***** 2026-02-14 04:18:43.356779 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:18:43.356791 | orchestrator | 2026-02-14 04:18:43.356804 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-14 04:18:43.356816 | orchestrator | Saturday 14 February 2026 04:17:43 +0000 (0:00:23.138) 0:03:53.740 ***** 2026-02-14 04:18:43.356828 | orchestrator | 2026-02-14 04:18:43.356840 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-14 04:18:43.356852 | orchestrator | Saturday 14 February 2026 04:17:44 +0000 (0:00:00.074) 0:03:53.815 ***** 2026-02-14 04:18:43.356865 | orchestrator | 2026-02-14 04:18:43.356876 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-14 04:18:43.356889 | orchestrator | Saturday 14 February 2026 04:17:44 +0000 (0:00:00.072) 0:03:53.887 ***** 2026-02-14 04:18:43.356900 | orchestrator | 2026-02-14 04:18:43.356913 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-02-14 04:18:43.356926 | orchestrator | Saturday 14 February 2026 04:17:44 +0000 (0:00:00.072) 0:03:53.960 ***** 2026-02-14 04:18:43.356948 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:18:43.356961 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:18:43.356973 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:18:43.356985 | orchestrator | 2026-02-14 04:18:43.356997 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-02-14 04:18:43.357009 | orchestrator | Saturday 14 February 2026 04:18:00 +0000 (0:00:16.175) 0:04:10.136 ***** 2026-02-14 04:18:43.357021 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:18:43.357032 | orchestrator | changed: 
[testbed-node-0] 2026-02-14 04:18:43.357043 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:18:43.357054 | orchestrator | 2026-02-14 04:18:43.357064 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-02-14 04:18:43.357075 | orchestrator | Saturday 14 February 2026 04:18:11 +0000 (0:00:11.332) 0:04:21.469 ***** 2026-02-14 04:18:43.357086 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:18:43.357097 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:18:43.357108 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:18:43.357119 | orchestrator | 2026-02-14 04:18:43.357130 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-02-14 04:18:43.357141 | orchestrator | Saturday 14 February 2026 04:18:22 +0000 (0:00:10.524) 0:04:31.993 ***** 2026-02-14 04:18:43.357152 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:18:43.357163 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:18:43.357173 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:18:43.357184 | orchestrator | 2026-02-14 04:18:43.357195 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-02-14 04:18:43.357206 | orchestrator | Saturday 14 February 2026 04:18:32 +0000 (0:00:10.316) 0:04:42.309 ***** 2026-02-14 04:18:43.357217 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:18:43.357228 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:18:43.357238 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:18:43.357249 | orchestrator | 2026-02-14 04:18:43.357260 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 04:18:43.357272 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-14 04:18:43.357284 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-02-14 04:18:43.357295 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-14 04:18:43.357306 | orchestrator | 2026-02-14 04:18:43.357317 | orchestrator | 2026-02-14 04:18:43.357328 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 04:18:43.357339 | orchestrator | Saturday 14 February 2026 04:18:43 +0000 (0:00:10.827) 0:04:53.136 ***** 2026-02-14 04:18:43.357350 | orchestrator | =============================================================================== 2026-02-14 04:18:43.357361 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 23.14s 2026-02-14 04:18:43.357371 | orchestrator | octavia : Add rules for security groups -------------------------------- 17.15s 2026-02-14 04:18:43.357388 | orchestrator | octavia : Restart octavia-api container -------------------------------- 16.18s 2026-02-14 04:18:43.357399 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.05s 2026-02-14 04:18:43.357410 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.88s 2026-02-14 04:18:43.357421 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.33s 2026-02-14 04:18:43.357456 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.83s 2026-02-14 04:18:43.357468 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.52s 2026-02-14 04:18:43.357478 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.32s 2026-02-14 04:18:43.357489 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.80s 2026-02-14 04:18:43.357506 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.27s 2026-02-14 04:18:43.357517 
| orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.36s 2026-02-14 04:18:43.357528 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.73s 2026-02-14 04:18:43.357539 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.67s 2026-02-14 04:18:43.357557 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.53s 2026-02-14 04:18:43.689426 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.27s 2026-02-14 04:18:43.689597 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.15s 2026-02-14 04:18:43.689613 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.13s 2026-02-14 04:18:43.689625 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.12s 2026-02-14 04:18:43.689636 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.07s 2026-02-14 04:18:46.013607 | orchestrator | 2026-02-14 04:18:46 | INFO  | Task 463c7082-f20d-4c54-9362-0658d8cfc196 (ceilometer) was prepared for execution. 2026-02-14 04:18:46.013706 | orchestrator | 2026-02-14 04:18:46 | INFO  | It takes a moment until task 463c7082-f20d-4c54-9362-0658d8cfc196 (ceilometer) has been started and output is visible here. 
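The octavia play above repeatedly logs container definitions whose `healthcheck` dict (`interval`, `retries`, `start_period`, `test`, `timeout`) follows the Docker healthcheck model, with probes like `healthcheck_port octavia-worker 5672`. As a minimal sketch (this is not the actual kolla-ansible or kolla_container module code; the helper name and flag mapping are assumptions for illustration), such a definition could be translated into `docker run` health flags like this:

```python
# Hypothetical helper (not kolla-ansible code): map a kolla-style healthcheck
# dict, as seen in the job log, onto Docker CLI health flags.
def healthcheck_args(service):
    """Return docker run flags for a container definition's healthcheck."""
    hc = service.get("healthcheck")
    if not hc:
        # e.g. octavia_driver_agent in the log has no healthcheck key
        return []
    test = hc["test"]
    # 'CMD-SHELL' means the rest of the list is a single shell command
    cmd = " ".join(test[1:]) if test and test[0] == "CMD-SHELL" else " ".join(test)
    return [
        "--health-cmd", cmd,
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", str(hc["retries"]),
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]

# Values copied from the 'octavia-worker' items in the log above.
octavia_worker = {
    "container_name": "octavia_worker",
    "healthcheck": {
        "interval": "30", "retries": "3", "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_port octavia-worker 5672"],
        "timeout": "30",
    },
}
print(healthcheck_args(octavia_worker))
```

The `healthcheck_port` probe itself runs inside the kolla image and checks that the named process holds a connection on the given port (5672 is RabbitMQ, 3306 MariaDB), which is why worker-style services probe the message queue or database rather than an HTTP endpoint like `octavia_api` does.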
2026-02-14 04:19:09.684568 | orchestrator | 2026-02-14 04:19:09.684651 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-14 04:19:09.684660 | orchestrator | 2026-02-14 04:19:09.684666 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-14 04:19:09.684674 | orchestrator | Saturday 14 February 2026 04:18:50 +0000 (0:00:00.262) 0:00:00.262 ***** 2026-02-14 04:19:09.684680 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:19:09.684687 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:19:09.684693 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:19:09.684699 | orchestrator | ok: [testbed-node-3] 2026-02-14 04:19:09.684705 | orchestrator | ok: [testbed-node-4] 2026-02-14 04:19:09.684711 | orchestrator | ok: [testbed-node-5] 2026-02-14 04:19:09.684717 | orchestrator | 2026-02-14 04:19:09.684722 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-14 04:19:09.684728 | orchestrator | Saturday 14 February 2026 04:18:50 +0000 (0:00:00.716) 0:00:00.979 ***** 2026-02-14 04:19:09.684735 | orchestrator | ok: [testbed-node-0] => (item=enable_ceilometer_True) 2026-02-14 04:19:09.684742 | orchestrator | ok: [testbed-node-1] => (item=enable_ceilometer_True) 2026-02-14 04:19:09.684748 | orchestrator | ok: [testbed-node-2] => (item=enable_ceilometer_True) 2026-02-14 04:19:09.684753 | orchestrator | ok: [testbed-node-3] => (item=enable_ceilometer_True) 2026-02-14 04:19:09.684759 | orchestrator | ok: [testbed-node-4] => (item=enable_ceilometer_True) 2026-02-14 04:19:09.684765 | orchestrator | ok: [testbed-node-5] => (item=enable_ceilometer_True) 2026-02-14 04:19:09.684771 | orchestrator | 2026-02-14 04:19:09.684777 | orchestrator | PLAY [Apply role ceilometer] *************************************************** 2026-02-14 04:19:09.684782 | orchestrator | 2026-02-14 04:19:09.684788 | orchestrator | TASK [ceilometer : 
include_tasks] ********************************************** 2026-02-14 04:19:09.684793 | orchestrator | Saturday 14 February 2026 04:18:51 +0000 (0:00:00.599) 0:00:01.578 ***** 2026-02-14 04:19:09.684800 | orchestrator | included: /ansible/roles/ceilometer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-14 04:19:09.684807 | orchestrator | 2026-02-14 04:19:09.684813 | orchestrator | TASK [service-ks-register : ceilometer | Creating services] ******************** 2026-02-14 04:19:09.684818 | orchestrator | Saturday 14 February 2026 04:18:52 +0000 (0:00:01.225) 0:00:02.804 ***** 2026-02-14 04:19:09.684824 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:19:09.684829 | orchestrator | 2026-02-14 04:19:09.684835 | orchestrator | TASK [service-ks-register : ceilometer | Creating endpoints] ******************* 2026-02-14 04:19:09.684861 | orchestrator | Saturday 14 February 2026 04:18:52 +0000 (0:00:00.126) 0:00:02.931 ***** 2026-02-14 04:19:09.684868 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:19:09.684873 | orchestrator | 2026-02-14 04:19:09.684879 | orchestrator | TASK [service-ks-register : ceilometer | Creating projects] ******************** 2026-02-14 04:19:09.684885 | orchestrator | Saturday 14 February 2026 04:18:53 +0000 (0:00:00.136) 0:00:03.067 ***** 2026-02-14 04:19:09.684891 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-14 04:19:09.684896 | orchestrator | 2026-02-14 04:19:09.684902 | orchestrator | TASK [service-ks-register : ceilometer | Creating users] *********************** 2026-02-14 04:19:09.684908 | orchestrator | Saturday 14 February 2026 04:18:56 +0000 (0:00:03.919) 0:00:06.987 ***** 2026-02-14 04:19:09.684913 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-14 04:19:09.684919 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service) 2026-02-14 04:19:09.684924 | orchestrator | 
2026-02-14 04:19:09.684942 | orchestrator | TASK [service-ks-register : ceilometer | Creating roles] *********************** 2026-02-14 04:19:09.684948 | orchestrator | Saturday 14 February 2026 04:19:00 +0000 (0:00:03.860) 0:00:10.847 ***** 2026-02-14 04:19:09.684954 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-14 04:19:09.684959 | orchestrator | 2026-02-14 04:19:09.684964 | orchestrator | TASK [service-ks-register : ceilometer | Granting user roles] ****************** 2026-02-14 04:19:09.684969 | orchestrator | Saturday 14 February 2026 04:19:04 +0000 (0:00:03.157) 0:00:14.005 ***** 2026-02-14 04:19:09.684975 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service -> admin) 2026-02-14 04:19:09.684980 | orchestrator | 2026-02-14 04:19:09.684986 | orchestrator | TASK [ceilometer : Associate the ResellerAdmin role and ceilometer user] ******* 2026-02-14 04:19:09.684991 | orchestrator | Saturday 14 February 2026 04:19:08 +0000 (0:00:04.062) 0:00:18.067 ***** 2026-02-14 04:19:09.684997 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:19:09.685002 | orchestrator | 2026-02-14 04:19:09.685008 | orchestrator | TASK [ceilometer : Ensuring config directories exist] ************************** 2026-02-14 04:19:09.685014 | orchestrator | Saturday 14 February 2026 04:19:08 +0000 (0:00:00.133) 0:00:18.200 ***** 2026-02-14 04:19:09.685023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-14 04:19:09.685047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-14 04:19:09.685053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-14 04:19:09.685067 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-14 04:19:09.685081 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-14 04:19:09.685088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-14 04:19:09.685095 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-14 04:19:09.685106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-14 04:19:14.283290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-14 04:19:14.283393 | orchestrator | 2026-02-14 04:19:14.283403 | orchestrator | TASK [ceilometer : Check if the folder for custom meter definitions exist] ***** 2026-02-14 04:19:14.283455 | orchestrator | Saturday 14 February 2026 04:19:09 +0000 (0:00:01.459) 0:00:19.660 ***** 2026-02-14 04:19:14.283462 | orchestrator | ok: [testbed-node-0 -> 
localhost] 2026-02-14 04:19:14.283470 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-14 04:19:14.283476 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-14 04:19:14.283482 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-14 04:19:14.283488 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-14 04:19:14.283494 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-14 04:19:14.283500 | orchestrator | 2026-02-14 04:19:14.283506 | orchestrator | TASK [ceilometer : Set variable that indicates if we have a folder for custom meter YAML files] *** 2026-02-14 04:19:14.283512 | orchestrator | Saturday 14 February 2026 04:19:11 +0000 (0:00:01.556) 0:00:21.217 ***** 2026-02-14 04:19:14.283518 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:19:14.283526 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:19:14.283531 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:19:14.283537 | orchestrator | ok: [testbed-node-3] 2026-02-14 04:19:14.283543 | orchestrator | ok: [testbed-node-4] 2026-02-14 04:19:14.283549 | orchestrator | ok: [testbed-node-5] 2026-02-14 04:19:14.283554 | orchestrator | 2026-02-14 04:19:14.283561 | orchestrator | TASK [ceilometer : Find all *.yaml files in custom meter definitions folder (if the folder exist)] *** 2026-02-14 04:19:14.283567 | orchestrator | Saturday 14 February 2026 04:19:11 +0000 (0:00:00.600) 0:00:21.817 ***** 2026-02-14 04:19:14.283573 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:19:14.283582 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:19:14.283593 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:19:14.283603 | orchestrator | skipping: [testbed-node-3] 2026-02-14 04:19:14.283613 | orchestrator | skipping: [testbed-node-4] 2026-02-14 04:19:14.283622 | orchestrator | skipping: [testbed-node-5] 2026-02-14 04:19:14.283631 | orchestrator | 2026-02-14 04:19:14.283640 | orchestrator | TASK [ceilometer : Set the variable that control the copy of custom meter 
definitions] *** 2026-02-14 04:19:14.283650 | orchestrator | Saturday 14 February 2026 04:19:12 +0000 (0:00:00.790) 0:00:22.608 ***** 2026-02-14 04:19:14.283660 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:19:14.283671 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:19:14.283681 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:19:14.283690 | orchestrator | ok: [testbed-node-3] 2026-02-14 04:19:14.283701 | orchestrator | ok: [testbed-node-4] 2026-02-14 04:19:14.283707 | orchestrator | ok: [testbed-node-5] 2026-02-14 04:19:14.283713 | orchestrator | 2026-02-14 04:19:14.283751 | orchestrator | TASK [ceilometer : Create default folder for custom meter definitions] ********* 2026-02-14 04:19:14.283758 | orchestrator | Saturday 14 February 2026 04:19:13 +0000 (0:00:00.593) 0:00:23.201 ***** 2026-02-14 04:19:14.283765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-14 04:19:14.283774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-14 04:19:14.283787 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:19:14.283810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-14 04:19:14.283816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-14 04:19:14.283823 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:19:14.283829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-14 04:19:14.283840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-14 04:19:14.283848 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-14 04:19:14.283856 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:19:14.283862 | orchestrator | skipping: [testbed-node-3] 2026-02-14 04:19:14.283869 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-14 04:19:14.283881 | orchestrator | skipping: [testbed-node-4] 2026-02-14 04:19:14.283893 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-14 04:19:18.858797 | orchestrator | skipping: [testbed-node-5] 2026-02-14 04:19:18.858903 | orchestrator | 2026-02-14 04:19:18.858922 | orchestrator | TASK [ceilometer : Copying custom meter definitions to Ceilometer] ************* 2026-02-14 04:19:18.858936 | orchestrator | Saturday 14 February 2026 04:19:14 +0000 (0:00:01.064) 0:00:24.265 ***** 2026-02-14 04:19:18.858950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': 
{'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-14 04:19:18.858965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-14 04:19:18.858977 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:19:18.859004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-14 04:19:18.859017 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-14 04:19:18.859049 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:19:18.859061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-14 04:19:18.859073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': 
'30'}}})  2026-02-14 04:19:18.859085 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:19:18.859113 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-14 04:19:18.859126 | orchestrator | skipping: [testbed-node-3] 2026-02-14 04:19:18.859138 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-14 04:19:18.859149 | orchestrator | skipping: [testbed-node-4] 2026-02-14 04:19:18.859165 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-14 04:19:18.859185 | orchestrator | skipping: [testbed-node-5] 2026-02-14 04:19:18.859197 | orchestrator | 2026-02-14 04:19:18.859209 | orchestrator | TASK [ceilometer : Check if the folder ["/opt/configuration/environments/kolla/files/overlays/ceilometer/pollsters.d"] for dynamic pollsters definitions exist] *** 2026-02-14 04:19:18.859222 | orchestrator | Saturday 14 February 2026 04:19:15 +0000 (0:00:00.855) 0:00:25.120 ***** 2026-02-14 04:19:18.859233 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-14 04:19:18.859244 | orchestrator | 2026-02-14 04:19:18.859255 | orchestrator | TASK [ceilometer : Set the variable that control the copy of dynamic pollsters definitions] *** 2026-02-14 04:19:18.859266 | orchestrator | Saturday 14 February 2026 04:19:15 +0000 (0:00:00.686) 0:00:25.807 ***** 2026-02-14 04:19:18.859277 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:19:18.859289 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:19:18.859299 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:19:18.859310 | orchestrator | ok: [testbed-node-3] 2026-02-14 04:19:18.859321 | orchestrator | ok: [testbed-node-4] 2026-02-14 04:19:18.859331 | orchestrator | ok: [testbed-node-5] 2026-02-14 04:19:18.859342 | orchestrator | 2026-02-14 04:19:18.859352 | orchestrator | TASK [ceilometer : Clean default folder for dynamic pollsters definitions] ***** 2026-02-14 04:19:18.859363 | orchestrator | Saturday 14 February 2026 04:19:16 +0000 
(0:00:00.767) 0:00:26.574 ***** 2026-02-14 04:19:18.859374 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:19:18.859385 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:19:18.859395 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:19:18.859433 | orchestrator | ok: [testbed-node-3] 2026-02-14 04:19:18.859444 | orchestrator | ok: [testbed-node-4] 2026-02-14 04:19:18.859455 | orchestrator | ok: [testbed-node-5] 2026-02-14 04:19:18.859465 | orchestrator | 2026-02-14 04:19:18.859477 | orchestrator | TASK [ceilometer : Create default folder for dynamic pollsters definitions] **** 2026-02-14 04:19:18.859487 | orchestrator | Saturday 14 February 2026 04:19:17 +0000 (0:00:00.906) 0:00:27.481 ***** 2026-02-14 04:19:18.859498 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:19:18.859509 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:19:18.859520 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:19:18.859531 | orchestrator | skipping: [testbed-node-3] 2026-02-14 04:19:18.859542 | orchestrator | skipping: [testbed-node-4] 2026-02-14 04:19:18.859553 | orchestrator | skipping: [testbed-node-5] 2026-02-14 04:19:18.859563 | orchestrator | 2026-02-14 04:19:18.859574 | orchestrator | TASK [ceilometer : Copying dynamic pollsters definitions] ********************** 2026-02-14 04:19:18.859586 | orchestrator | Saturday 14 February 2026 04:19:18 +0000 (0:00:00.760) 0:00:28.242 ***** 2026-02-14 04:19:18.859597 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:19:18.859607 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:19:18.859618 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:19:18.859629 | orchestrator | skipping: [testbed-node-3] 2026-02-14 04:19:18.859640 | orchestrator | skipping: [testbed-node-4] 2026-02-14 04:19:18.859651 | orchestrator | skipping: [testbed-node-5] 2026-02-14 04:19:18.859662 | orchestrator | 2026-02-14 04:19:23.881579 | orchestrator | TASK [ceilometer : Check if custom polling.yaml exists] 
************************ 2026-02-14 04:19:23.881701 | orchestrator | Saturday 14 February 2026 04:19:18 +0000 (0:00:00.603) 0:00:28.845 ***** 2026-02-14 04:19:23.881723 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-14 04:19:23.881739 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-14 04:19:23.881753 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-14 04:19:23.881769 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-14 04:19:23.881797 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-14 04:19:23.881814 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-14 04:19:23.881828 | orchestrator | 2026-02-14 04:19:23.881844 | orchestrator | TASK [ceilometer : Copying over polling.yaml] ********************************** 2026-02-14 04:19:23.881860 | orchestrator | Saturday 14 February 2026 04:19:20 +0000 (0:00:01.497) 0:00:30.343 ***** 2026-02-14 04:19:23.881906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-14 04:19:23.881944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-14 04:19:23.881961 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:19:23.881976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-14 04:19:23.881991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-14 04:19:23.882007 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:19:23.882120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-14 04:19:23.882153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-14 04:19:23.882175 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:19:23.882186 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-14 04:19:23.882197 | orchestrator | skipping: [testbed-node-3] 
2026-02-14 04:19:23.882213 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-14 04:19:23.882224 | orchestrator | skipping: [testbed-node-4] 2026-02-14 04:19:23.882234 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-14 04:19:23.882245 | orchestrator | skipping: [testbed-node-5] 2026-02-14 04:19:23.882255 | orchestrator | 2026-02-14 04:19:23.882265 | orchestrator | TASK [ceilometer : Set ceilometer polling file's path] ************************* 2026-02-14 04:19:23.882276 | orchestrator | Saturday 14 February 2026 04:19:21 +0000 (0:00:00.796) 0:00:31.139 ***** 2026-02-14 04:19:23.882285 | orchestrator | 
skipping: [testbed-node-0] 2026-02-14 04:19:23.882295 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:19:23.882305 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:19:23.882315 | orchestrator | skipping: [testbed-node-3] 2026-02-14 04:19:23.882325 | orchestrator | skipping: [testbed-node-4] 2026-02-14 04:19:23.882334 | orchestrator | skipping: [testbed-node-5] 2026-02-14 04:19:23.882344 | orchestrator | 2026-02-14 04:19:23.882354 | orchestrator | TASK [ceilometer : Check custom gnocchi_resources.yaml exists] ***************** 2026-02-14 04:19:23.882364 | orchestrator | Saturday 14 February 2026 04:19:21 +0000 (0:00:00.810) 0:00:31.949 ***** 2026-02-14 04:19:23.882374 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-14 04:19:23.882382 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-14 04:19:23.882391 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-14 04:19:23.882424 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-14 04:19:23.882440 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-14 04:19:23.882452 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-14 04:19:23.882461 | orchestrator | 2026-02-14 04:19:23.882470 | orchestrator | TASK [ceilometer : Copying over gnocchi_resources.yaml] ************************ 2026-02-14 04:19:23.882486 | orchestrator | Saturday 14 February 2026 04:19:23 +0000 (0:00:01.406) 0:00:33.356 ***** 2026-02-14 04:19:23.882504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-14 04:19:29.570995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-14 04:19:29.571112 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:19:29.571131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-14 04:19:29.571162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-14 04:19:29.571175 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:19:29.571187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-14 04:19:29.571199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-14 04:19:29.571231 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-14 04:19:29.571244 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:19:29.571255 | orchestrator | skipping: [testbed-node-3] 2026-02-14 04:19:29.571283 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-14 04:19:29.571295 | orchestrator | skipping: [testbed-node-4] 2026-02-14 04:19:29.571306 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-14 04:19:29.571318 | orchestrator | skipping: [testbed-node-5] 2026-02-14 04:19:29.571329 | orchestrator | 2026-02-14 04:19:29.571346 | orchestrator | TASK [ceilometer : Set ceilometer gnocchi_resources file's path] *************** 2026-02-14 04:19:29.571358 | orchestrator | Saturday 14 February 2026 04:19:24 +0000 (0:00:01.157) 0:00:34.514 ***** 2026-02-14 04:19:29.571369 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:19:29.571380 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:19:29.571391 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:19:29.571481 | orchestrator | skipping: [testbed-node-3] 2026-02-14 04:19:29.571493 | orchestrator | skipping: [testbed-node-4] 2026-02-14 04:19:29.571503 | orchestrator | skipping: [testbed-node-5] 2026-02-14 04:19:29.571514 | orchestrator | 2026-02-14 04:19:29.571525 | orchestrator | TASK [ceilometer : Check if policies shall be overwritten] ********************* 2026-02-14 04:19:29.571538 | orchestrator | Saturday 14 February 2026 04:19:25 +0000 (0:00:00.811) 0:00:35.325 ***** 2026-02-14 04:19:29.571550 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:19:29.571562 | orchestrator | 2026-02-14 04:19:29.571575 | orchestrator | TASK [ceilometer : Set ceilometer policy file] ********************************* 2026-02-14 04:19:29.571588 | orchestrator | Saturday 14 February 2026 04:19:25 +0000 (0:00:00.132) 0:00:35.457 ***** 2026-02-14 04:19:29.571600 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:19:29.571612 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:19:29.571624 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:19:29.571637 | orchestrator | skipping: [testbed-node-3] 2026-02-14 04:19:29.571659 | orchestrator | skipping: [testbed-node-4] 2026-02-14 04:19:29.571671 | orchestrator | skipping: [testbed-node-5] 2026-02-14 04:19:29.571683 | orchestrator | 2026-02-14 
04:19:29.571696 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-02-14 04:19:29.571708 | orchestrator | Saturday 14 February 2026 04:19:26 +0000 (0:00:00.608) 0:00:36.066 ***** 2026-02-14 04:19:29.571722 | orchestrator | included: /ansible/roles/ceilometer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-14 04:19:29.571735 | orchestrator | 2026-02-14 04:19:29.571748 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over extra CA certificates] ***** 2026-02-14 04:19:29.571760 | orchestrator | Saturday 14 February 2026 04:19:27 +0000 (0:00:01.271) 0:00:37.337 ***** 2026-02-14 04:19:29.571773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-14 04:19:29.571796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-14 04:19:30.092367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-14 04:19:30.092572 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-14 04:19:30.092601 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-14 04:19:30.092625 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-14 04:19:30.092632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-14 04:19:30.092640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-14 04:19:30.092662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-14 04:19:30.092668 | orchestrator | 2026-02-14 04:19:30.092676 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS certificate] *** 2026-02-14 04:19:30.092683 | orchestrator | Saturday 14 February 2026 04:19:29 +0000 (0:00:02.216) 0:00:39.554 ***** 2026-02-14 04:19:30.092690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-14 04:19:30.092700 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-14 04:19:30.092712 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:19:30.092720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-14 04:19:30.092726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  
2026-02-14 04:19:30.092733 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:19:30.092739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-14 04:19:30.092750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-14 04:19:31.917126 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:19:31.917263 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-14 04:19:31.917292 | orchestrator | skipping: [testbed-node-3] 2026-02-14 04:19:31.917346 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-14 04:19:31.917359 | orchestrator | skipping: [testbed-node-4] 2026-02-14 04:19:31.917370 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-14 04:19:31.917382 | orchestrator | skipping: [testbed-node-5] 2026-02-14 
04:19:31.917472 | orchestrator | 2026-02-14 04:19:31.917495 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS key] *** 2026-02-14 04:19:31.917518 | orchestrator | Saturday 14 February 2026 04:19:30 +0000 (0:00:00.846) 0:00:40.401 ***** 2026-02-14 04:19:31.917540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-14 04:19:31.917561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-14 04:19:31.917604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-14 04:19:31.917624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-14 04:19:31.917646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-14 04:19:31.917658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-14 04:19:31.917669 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:19:31.917681 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:19:31.917692 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:19:31.917704 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-14 04:19:31.917715 | orchestrator | skipping: [testbed-node-3] 2026-02-14 04:19:31.917727 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-14 04:19:31.917738 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:19:31.917759 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-14 04:19:38.607724 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:19:38.607857 | orchestrator |
2026-02-14 04:19:38.607876 | orchestrator | TASK [ceilometer : Copying over config.json files for services] ****************
2026-02-14 04:19:38.607890 | orchestrator | Saturday 14 February 2026 04:19:31 +0000 (0:00:01.484) 0:00:41.886 *****
2026-02-14 04:19:38.607921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-14 04:19:38.607937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-14 04:19:38.607949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-14 04:19:38.607962 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-14 04:19:38.607976 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-14 04:19:38.608027 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-14 04:19:38.608046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-14 04:19:38.608059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-14 04:19:38.608071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-14 04:19:38.608086 | orchestrator |
2026-02-14 04:19:38.608103 | orchestrator | TASK [ceilometer : Copying over ceilometer.conf] *******************************
2026-02-14 04:19:38.608121 | orchestrator | Saturday 14 February 2026 04:19:34 +0000 (0:00:02.477) 0:00:44.363 *****
2026-02-14 04:19:38.608139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-14 04:19:38.608159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-14 04:19:38.608203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-14 04:19:47.468938 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-14 04:19:47.469077 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-14 04:19:47.469105 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-14 04:19:47.469119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-14 04:19:47.469132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-14 04:19:47.469171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-14 04:19:47.469193 | orchestrator |
2026-02-14 04:19:47.469215 | orchestrator | TASK [ceilometer : Check custom event_definitions.yaml exists] *****************
2026-02-14 04:19:47.469259 | orchestrator | Saturday 14 February 2026 04:19:38 +0000 (0:00:04.229) 0:00:48.592 *****
2026-02-14 04:19:47.469281 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-14 04:19:47.469300 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-14 04:19:47.469318 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-14 04:19:47.469337 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-14 04:19:47.469355 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-14 04:19:47.469375 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-14 04:19:47.469449 | orchestrator |
2026-02-14 04:19:47.469481 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml] ************************
2026-02-14 04:19:47.469502 | orchestrator | Saturday 14 February 2026 04:19:39 +0000 (0:00:01.383) 0:00:49.975 *****
2026-02-14 04:19:47.469521 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:19:47.469542 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:19:47.469561 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:19:47.469579 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:19:47.469598 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:19:47.469618 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:19:47.469637 | orchestrator |
2026-02-14 04:19:47.469657 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml for notification service] ***
2026-02-14 04:19:47.469678 | orchestrator | Saturday 14 February 2026 04:19:40 +0000 (0:00:00.529) 0:00:50.505 *****
2026-02-14 04:19:47.469697 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:19:47.469718 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:19:47.469737 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:19:47.469757 | orchestrator | changed: [testbed-node-0]
2026-02-14 04:19:47.469775 | orchestrator | changed: [testbed-node-1]
2026-02-14 04:19:47.469794 | orchestrator | changed: [testbed-node-2]
2026-02-14 04:19:47.469812 | orchestrator |
2026-02-14 04:19:47.469831 | orchestrator | TASK [ceilometer : Copying over event_pipeline.yaml] ***************************
2026-02-14 04:19:47.469850 | orchestrator | Saturday 14 February 2026 04:19:42 +0000 (0:00:01.526) 0:00:52.032 *****
2026-02-14 04:19:47.469868 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:19:47.469887 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:19:47.469906 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:19:47.469925 | orchestrator | changed: [testbed-node-0]
2026-02-14 04:19:47.469943 | orchestrator | changed: [testbed-node-2]
2026-02-14 04:19:47.469961 | orchestrator | changed: [testbed-node-1]
2026-02-14 04:19:47.469979 | orchestrator |
2026-02-14 04:19:47.469998 | orchestrator | TASK [ceilometer : Check custom pipeline.yaml exists] **************************
2026-02-14 04:19:47.470016 | orchestrator | Saturday 14 February 2026 04:19:43 +0000 (0:00:01.286) 0:00:53.318 *****
2026-02-14 04:19:47.470121 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-14 04:19:47.470140 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-14 04:19:47.470158 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-14 04:19:47.470195 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-14 04:19:47.470212 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-14 04:19:47.470229 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-14 04:19:47.470248 | orchestrator |
2026-02-14 04:19:47.470260 | orchestrator | TASK [ceilometer : Copying over custom pipeline.yaml file] *********************
2026-02-14 04:19:47.470272 | orchestrator | Saturday 14 February 2026 04:19:44 +0000 (0:00:01.504) 0:00:54.823 *****
2026-02-14 04:19:47.470284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-14 04:19:47.470297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-14 04:19:47.470310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-14 04:19:47.470343 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-14 04:19:48.386540 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-14 04:19:48.386651 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-14 04:19:48.386698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-14 04:19:48.386715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-14 04:19:48.386729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-14 04:19:48.386743 | orchestrator |
2026-02-14 04:19:48.386757 | orchestrator | TASK [ceilometer : Copying over pipeline.yaml file] ****************************
2026-02-14 04:19:48.386772 | orchestrator | Saturday 14 February 2026 04:19:47 +0000 (0:00:02.624) 0:00:57.447 *****
2026-02-14 04:19:48.386800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-14 04:19:48.386834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-14 04:19:48.386859 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:19:48.386875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-14 04:19:48.386888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-14 04:19:48.386901 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:19:48.386913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-14 04:19:48.386925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-14 04:19:48.386938 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:19:48.386959 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-14 04:19:48.386972 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:19:48.386993 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-14 04:19:52.200148 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:19:52.200241 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-14 04:19:52.200254 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:19:52.200261 | orchestrator |
2026-02-14 04:19:52.200269 | orchestrator | TASK [ceilometer : Copying VMware vCenter CA file] *****************************
2026-02-14 04:19:52.200276 | orchestrator | Saturday 14 February 2026 04:19:48 +0000 (0:00:00.927) 0:00:58.375 *****
2026-02-14 04:19:52.200283 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:19:52.200289 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:19:52.200295 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:19:52.200302 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:19:52.200308 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:19:52.200314 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:19:52.200321 | orchestrator |
2026-02-14 04:19:52.200327 | orchestrator | TASK [ceilometer : Copying over existing policy file] **************************
2026-02-14 04:19:52.200334 | orchestrator | Saturday 14 February 2026 04:19:49 +0000 (0:00:00.969) 0:00:59.345 *****
2026-02-14 04:19:52.200341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-14 04:19:52.200349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-14 04:19:52.200370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-14 04:19:52.200457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-14 04:19:52.200467 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:19:52.200474 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:19:52.200494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-14 04:19:52.200502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-14 04:19:52.200508 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-14 04:19:52.200515 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:19:52.200522 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:19:52.200529 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-14 04:19:52.200537 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:19:52.200550 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-14 04:19:52.200564 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:19:52.200571 | orchestrator |
2026-02-14 04:19:52.200579 | orchestrator | TASK [ceilometer : Check ceilometer containers] ********************************
2026-02-14 04:19:52.200586 | orchestrator | Saturday 14 February 2026 04:19:50 +0000 (0:00:00.966) 0:01:00.311 *****
2026-02-14 04:19:52.200600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'],
'timeout': '30'}}}) 2026-02-14 04:20:27.198429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-14 04:20:27.198563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-14 04:20:27.198576 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-14 04:20:27.198587 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-14 04:20:27.198632 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-14 04:20:27.198641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-14 04:20:27.198665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-14 04:20:27.198674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-14 04:20:27.198683 | orchestrator | 2026-02-14 04:20:27.198693 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-02-14 04:20:27.198703 | orchestrator | Saturday 14 February 2026 04:19:52 +0000 (0:00:01.870) 0:01:02.182 ***** 2026-02-14 04:20:27.198712 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:20:27.198722 | 
orchestrator | skipping: [testbed-node-1] 2026-02-14 04:20:27.198731 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:20:27.198740 | orchestrator | skipping: [testbed-node-3] 2026-02-14 04:20:27.198748 | orchestrator | skipping: [testbed-node-4] 2026-02-14 04:20:27.198757 | orchestrator | skipping: [testbed-node-5] 2026-02-14 04:20:27.198765 | orchestrator | 2026-02-14 04:20:27.198774 | orchestrator | TASK [ceilometer : Running Ceilometer bootstrap container] ********************* 2026-02-14 04:20:27.198783 | orchestrator | Saturday 14 February 2026 04:19:52 +0000 (0:00:00.632) 0:01:02.814 ***** 2026-02-14 04:20:27.198790 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:20:27.198799 | orchestrator | 2026-02-14 04:20:27.198807 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-14 04:20:27.198822 | orchestrator | Saturday 14 February 2026 04:19:57 +0000 (0:00:04.814) 0:01:07.629 ***** 2026-02-14 04:20:27.198830 | orchestrator | 2026-02-14 04:20:27.198839 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-14 04:20:27.198848 | orchestrator | Saturday 14 February 2026 04:19:57 +0000 (0:00:00.073) 0:01:07.702 ***** 2026-02-14 04:20:27.198856 | orchestrator | 2026-02-14 04:20:27.198865 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-14 04:20:27.198872 | orchestrator | Saturday 14 February 2026 04:19:57 +0000 (0:00:00.070) 0:01:07.773 ***** 2026-02-14 04:20:27.198881 | orchestrator | 2026-02-14 04:20:27.198889 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-14 04:20:27.198897 | orchestrator | Saturday 14 February 2026 04:19:58 +0000 (0:00:00.262) 0:01:08.035 ***** 2026-02-14 04:20:27.198905 | orchestrator | 2026-02-14 04:20:27.198913 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 
2026-02-14 04:20:27.198922 | orchestrator | Saturday 14 February 2026 04:19:58 +0000 (0:00:00.070) 0:01:08.106 ***** 2026-02-14 04:20:27.198930 | orchestrator | 2026-02-14 04:20:27.198938 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-14 04:20:27.198947 | orchestrator | Saturday 14 February 2026 04:19:58 +0000 (0:00:00.066) 0:01:08.173 ***** 2026-02-14 04:20:27.198955 | orchestrator | 2026-02-14 04:20:27.198964 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-notification container] ******* 2026-02-14 04:20:27.198972 | orchestrator | Saturday 14 February 2026 04:19:58 +0000 (0:00:00.082) 0:01:08.256 ***** 2026-02-14 04:20:27.198981 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:20:27.198994 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:20:27.199000 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:20:27.199006 | orchestrator | 2026-02-14 04:20:27.199012 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-central container] ************ 2026-02-14 04:20:27.199018 | orchestrator | Saturday 14 February 2026 04:20:05 +0000 (0:00:07.614) 0:01:15.870 ***** 2026-02-14 04:20:27.199024 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:20:27.199030 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:20:27.199039 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:20:27.199047 | orchestrator | 2026-02-14 04:20:27.199056 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-compute container] ************ 2026-02-14 04:20:27.199065 | orchestrator | Saturday 14 February 2026 04:20:15 +0000 (0:00:10.048) 0:01:25.919 ***** 2026-02-14 04:20:27.199074 | orchestrator | changed: [testbed-node-3] 2026-02-14 04:20:27.199083 | orchestrator | changed: [testbed-node-5] 2026-02-14 04:20:27.199092 | orchestrator | changed: [testbed-node-4] 2026-02-14 04:20:27.199101 | orchestrator | 2026-02-14 04:20:27.199109 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-14 04:20:27.199120 | orchestrator | testbed-node-0 : ok=29  changed=13  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-02-14 04:20:27.199131 | orchestrator | testbed-node-1 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-14 04:20:27.199148 | orchestrator | testbed-node-2 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-14 04:20:27.849494 | orchestrator | testbed-node-3 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-14 04:20:27.849578 | orchestrator | testbed-node-4 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-14 04:20:27.849589 | orchestrator | testbed-node-5 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-14 04:20:27.849597 | orchestrator | 2026-02-14 04:20:27.849605 | orchestrator | 2026-02-14 04:20:27.849614 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 04:20:27.849643 | orchestrator | Saturday 14 February 2026 04:20:27 +0000 (0:00:11.253) 0:01:37.172 ***** 2026-02-14 04:20:27.849651 | orchestrator | =============================================================================== 2026-02-14 04:20:27.849659 | orchestrator | ceilometer : Restart ceilometer-compute container ---------------------- 11.25s 2026-02-14 04:20:27.849666 | orchestrator | ceilometer : Restart ceilometer-central container ---------------------- 10.05s 2026-02-14 04:20:27.849674 | orchestrator | ceilometer : Restart ceilometer-notification container ------------------ 7.61s 2026-02-14 04:20:27.849681 | orchestrator | ceilometer : Running Ceilometer bootstrap container --------------------- 4.81s 2026-02-14 04:20:27.849689 | orchestrator | ceilometer : Copying over ceilometer.conf ------------------------------- 4.23s 2026-02-14 04:20:27.849696 | orchestrator | 
service-ks-register : ceilometer | Granting user roles ------------------ 4.06s 2026-02-14 04:20:27.849703 | orchestrator | service-ks-register : ceilometer | Creating projects -------------------- 3.92s 2026-02-14 04:20:27.849710 | orchestrator | service-ks-register : ceilometer | Creating users ----------------------- 3.86s 2026-02-14 04:20:27.849718 | orchestrator | service-ks-register : ceilometer | Creating roles ----------------------- 3.16s 2026-02-14 04:20:27.849725 | orchestrator | ceilometer : Copying over custom pipeline.yaml file --------------------- 2.62s 2026-02-14 04:20:27.849732 | orchestrator | ceilometer : Copying over config.json files for services ---------------- 2.48s 2026-02-14 04:20:27.849739 | orchestrator | service-cert-copy : ceilometer | Copying over extra CA certificates ----- 2.22s 2026-02-14 04:20:27.849746 | orchestrator | ceilometer : Check ceilometer containers -------------------------------- 1.87s 2026-02-14 04:20:27.849753 | orchestrator | ceilometer : Check if the folder for custom meter definitions exist ----- 1.56s 2026-02-14 04:20:27.849761 | orchestrator | ceilometer : Copying over event_definitions.yaml for notification service --- 1.53s 2026-02-14 04:20:27.849769 | orchestrator | ceilometer : Check custom pipeline.yaml exists -------------------------- 1.50s 2026-02-14 04:20:27.849776 | orchestrator | ceilometer : Check if custom polling.yaml exists ------------------------ 1.50s 2026-02-14 04:20:27.849783 | orchestrator | service-cert-copy : ceilometer | Copying over backend internal TLS key --- 1.48s 2026-02-14 04:20:27.849790 | orchestrator | ceilometer : Ensuring config directories exist -------------------------- 1.46s 2026-02-14 04:20:27.849798 | orchestrator | ceilometer : Check custom gnocchi_resources.yaml exists ----------------- 1.41s 2026-02-14 04:20:30.151773 | orchestrator | 2026-02-14 04:20:30 | INFO  | Task a7b41525-663e-43ab-bccf-4c9ee0d82adc (aodh) was prepared for execution. 
2026-02-14 04:20:30.151875 | orchestrator | 2026-02-14 04:20:30 | INFO  | It takes a moment until task a7b41525-663e-43ab-bccf-4c9ee0d82adc (aodh) has been started and output is visible here. 2026-02-14 04:21:01.636043 | orchestrator | 2026-02-14 04:21:01.636152 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-14 04:21:01.636168 | orchestrator | 2026-02-14 04:21:01.636179 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-14 04:21:01.636189 | orchestrator | Saturday 14 February 2026 04:20:34 +0000 (0:00:00.261) 0:00:00.261 ***** 2026-02-14 04:21:01.636200 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:21:01.636226 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:21:01.636236 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:21:01.636246 | orchestrator | 2026-02-14 04:21:01.636256 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-14 04:21:01.636266 | orchestrator | Saturday 14 February 2026 04:20:34 +0000 (0:00:00.330) 0:00:00.591 ***** 2026-02-14 04:21:01.636276 | orchestrator | ok: [testbed-node-0] => (item=enable_aodh_True) 2026-02-14 04:21:01.636286 | orchestrator | ok: [testbed-node-1] => (item=enable_aodh_True) 2026-02-14 04:21:01.636296 | orchestrator | ok: [testbed-node-2] => (item=enable_aodh_True) 2026-02-14 04:21:01.636306 | orchestrator | 2026-02-14 04:21:01.636316 | orchestrator | PLAY [Apply role aodh] ********************************************************* 2026-02-14 04:21:01.636325 | orchestrator | 2026-02-14 04:21:01.636425 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-02-14 04:21:01.636461 | orchestrator | Saturday 14 February 2026 04:20:35 +0000 (0:00:00.460) 0:00:01.052 ***** 2026-02-14 04:21:01.636471 | orchestrator | included: /ansible/roles/aodh/tasks/deploy.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-02-14 04:21:01.636482 | orchestrator | 2026-02-14 04:21:01.636492 | orchestrator | TASK [service-ks-register : aodh | Creating services] ************************** 2026-02-14 04:21:01.636502 | orchestrator | Saturday 14 February 2026 04:20:35 +0000 (0:00:00.547) 0:00:01.599 ***** 2026-02-14 04:21:01.636512 | orchestrator | changed: [testbed-node-0] => (item=aodh (alarming)) 2026-02-14 04:21:01.636522 | orchestrator | 2026-02-14 04:21:01.636532 | orchestrator | TASK [service-ks-register : aodh | Creating endpoints] ************************* 2026-02-14 04:21:01.636542 | orchestrator | Saturday 14 February 2026 04:20:39 +0000 (0:00:03.353) 0:00:04.953 ***** 2026-02-14 04:21:01.636551 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api-int.testbed.osism.xyz:8042 -> internal) 2026-02-14 04:21:01.636561 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api.testbed.osism.xyz:8042 -> public) 2026-02-14 04:21:01.636571 | orchestrator | 2026-02-14 04:21:01.636580 | orchestrator | TASK [service-ks-register : aodh | Creating projects] ************************** 2026-02-14 04:21:01.636592 | orchestrator | Saturday 14 February 2026 04:20:45 +0000 (0:00:06.341) 0:00:11.294 ***** 2026-02-14 04:21:01.636603 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-14 04:21:01.636616 | orchestrator | 2026-02-14 04:21:01.636626 | orchestrator | TASK [service-ks-register : aodh | Creating users] ***************************** 2026-02-14 04:21:01.636638 | orchestrator | Saturday 14 February 2026 04:20:48 +0000 (0:00:03.421) 0:00:14.716 ***** 2026-02-14 04:21:01.636649 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-14 04:21:01.636660 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service) 2026-02-14 04:21:01.636671 | orchestrator | 2026-02-14 04:21:01.636682 | orchestrator | TASK [service-ks-register : aodh | Creating roles] ***************************** 2026-02-14 
04:21:01.636693 | orchestrator | Saturday 14 February 2026 04:20:52 +0000 (0:00:03.847) 0:00:18.563 ***** 2026-02-14 04:21:01.636705 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-14 04:21:01.636716 | orchestrator | 2026-02-14 04:21:01.636727 | orchestrator | TASK [service-ks-register : aodh | Granting user roles] ************************ 2026-02-14 04:21:01.636739 | orchestrator | Saturday 14 February 2026 04:20:55 +0000 (0:00:03.204) 0:00:21.768 ***** 2026-02-14 04:21:01.636750 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service -> admin) 2026-02-14 04:21:01.636761 | orchestrator | 2026-02-14 04:21:01.636772 | orchestrator | TASK [aodh : Ensuring config directories exist] ******************************** 2026-02-14 04:21:01.636783 | orchestrator | Saturday 14 February 2026 04:20:59 +0000 (0:00:03.763) 0:00:25.532 ***** 2026-02-14 04:21:01.636798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-14 04:21:01.636837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-14 04:21:01.636859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-14 04:21:01.636872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-14 04:21:01.636884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-14 04:21:01.636896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-14 04:21:01.636908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-14 04:21:01.636928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-14 04:21:02.880558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-14 04:21:02.880689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-14 
04:21:02.880709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-14 04:21:02.880720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-14 04:21:02.880731 | orchestrator | 2026-02-14 04:21:02.880742 | orchestrator | TASK [aodh : Check if policies shall be overwritten] *************************** 2026-02-14 04:21:02.880939 | orchestrator | Saturday 14 February 2026 04:21:01 +0000 (0:00:01.954) 0:00:27.486 ***** 2026-02-14 04:21:02.880955 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:21:02.880967 | orchestrator | 2026-02-14 04:21:02.880977 | orchestrator | TASK [aodh : Set aodh policy file] ********************************************* 2026-02-14 04:21:02.880987 | orchestrator | Saturday 14 February 2026 04:21:01 +0000 (0:00:00.138) 0:00:27.625 ***** 2026-02-14 04:21:02.880997 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:21:02.881005 | orchestrator | skipping: 
[testbed-node-1] 2026-02-14 04:21:02.881014 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:21:02.881024 | orchestrator | 2026-02-14 04:21:02.881036 | orchestrator | TASK [aodh : Copying over existing policy file] ******************************** 2026-02-14 04:21:02.881045 | orchestrator | Saturday 14 February 2026 04:21:02 +0000 (0:00:00.487) 0:00:28.113 ***** 2026-02-14 04:21:02.881056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-14 04:21:02.881117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-14 04:21:02.881130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-14 04:21:02.881140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-14 04:21:02.881149 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:21:02.881160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-14 04:21:02.881179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-14 04:21:02.881197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-14 04:21:02.881220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 
'timeout': '30'}}})  2026-02-14 04:21:07.965505 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:21:07.965619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-14 04:21:07.965636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-14 04:21:07.965647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-14 04:21:07.965656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-14 04:21:07.965682 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:21:07.965693 | orchestrator | 2026-02-14 04:21:07.965727 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-02-14 04:21:07.965738 | orchestrator | Saturday 14 February 2026 04:21:02 +0000 (0:00:00.622) 0:00:28.735 ***** 2026-02-14 04:21:07.965748 | orchestrator | included: /ansible/roles/aodh/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 04:21:07.965757 | orchestrator | 2026-02-14 04:21:07.965766 | orchestrator | TASK [service-cert-copy : aodh | Copying over extra CA certificates] *********** 2026-02-14 04:21:07.965775 | orchestrator | Saturday 14 February 2026 04:21:03 +0000 (0:00:00.700) 0:00:29.435 ***** 2026-02-14 04:21:07.965784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-14 04:21:07.965814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-14 04:21:07.965825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-14 04:21:07.965834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-14 04:21:07.965851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-14 04:21:07.965860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-14 04:21:07.965869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-14 04:21:07.965890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-14 04:21:08.613874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-14 04:21:08.613977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-14 04:21:08.613995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-14 04:21:08.614090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-14 04:21:08.614105 | orchestrator | 2026-02-14 04:21:08.614146 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS certificate] *** 2026-02-14 04:21:08.614161 | orchestrator | Saturday 14 February 2026 04:21:07 +0000 (0:00:04.387) 0:00:33.823 ***** 2026-02-14 04:21:08.614175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-14 04:21:08.614201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-14 04:21:08.614243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 
'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-14 04:21:08.614257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-14 04:21:08.614268 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:21:08.614284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-14 04:21:08.614311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-14 04:21:08.614323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-14 04:21:08.614409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  
2026-02-14 04:21:08.614425 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:21:08.614449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-14 04:21:09.656381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-14 04:21:09.656506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-14 04:21:09.656514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-14 04:21:09.656518 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:21:09.656524 | orchestrator | 2026-02-14 04:21:09.656529 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS key] ******** 2026-02-14 04:21:09.656534 | orchestrator | Saturday 14 February 2026 04:21:08 +0000 (0:00:00.646) 0:00:34.469 ***** 2026-02-14 04:21:09.656539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-14 04:21:09.656553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-14 04:21:09.656557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-14 04:21:09.656572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 
'timeout': '30'}}})  2026-02-14 04:21:09.656582 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:21:09.656586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-14 04:21:09.656590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-14 04:21:09.656594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-14 04:21:09.656601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-14 04:21:09.656605 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:21:09.656613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-14 04:21:13.778269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-14 04:21:13.778416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-14 04:21:13.778433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-14 04:21:13.778446 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:21:13.778481 | orchestrator | 2026-02-14 04:21:13.778493 | orchestrator | TASK [aodh : Copying over config.json files for services] ********************** 
2026-02-14 04:21:13.778504 | orchestrator | Saturday 14 February 2026 04:21:09 +0000 (0:00:01.045) 0:00:35.515 ***** 2026-02-14 04:21:13.778515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-14 04:21:13.778547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-14 04:21:13.778580 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-14 04:21:13.778614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-14 04:21:13.778626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-14 04:21:13.778636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-14 04:21:13.778647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-14 04:21:13.778662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-14 04:21:13.778673 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-14 04:21:13.778699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-14 04:21:22.321292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-14 04:21:22.321445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-14 04:21:22.321462 | orchestrator | 2026-02-14 04:21:22.321476 | orchestrator | TASK [aodh : Copying over aodh.conf] ******************************************* 2026-02-14 04:21:22.321490 | orchestrator | Saturday 14 February 2026 04:21:13 +0000 (0:00:04.113) 0:00:39.628 ***** 2026-02-14 04:21:22.321502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-14 04:21:22.321531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-14 04:21:22.321568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-14 04:21:22.321599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-14 04:21:22.321612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-14 04:21:22.321623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-14 04:21:22.321635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-14 04:21:22.321651 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-14 04:21:22.321671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-14 04:21:22.321683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-14 04:21:22.321703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-14 04:21:27.455767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-14 04:21:27.455862 | orchestrator | 2026-02-14 04:21:27.455876 | orchestrator | TASK [aodh : Copying over wsgi-aodh files for services] ************************ 2026-02-14 04:21:27.455887 | orchestrator | Saturday 14 February 2026 04:21:22 +0000 (0:00:08.534) 0:00:48.162 ***** 2026-02-14 04:21:27.455898 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:21:27.455906 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:21:27.455912 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:21:27.455917 | orchestrator | 2026-02-14 04:21:27.455923 | orchestrator | TASK [aodh : Check aodh containers] ******************************************** 2026-02-14 04:21:27.455929 | orchestrator | Saturday 14 February 2026 04:21:24 +0000 (0:00:01.849) 0:00:50.012 ***** 2026-02-14 04:21:27.455936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 
'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-14 04:21:27.455970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-14 04:21:27.455977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-14 04:21:27.455994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-14 04:21:27.456000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-14 04:21:27.456006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 
'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-14 04:21:27.456012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-14 04:21:27.456026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-14 04:21:27.456032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-14 04:21:27.456038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-14 04:21:27.456049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-14 04:22:08.066262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-14 04:22:08.066438 | orchestrator | 2026-02-14 04:22:08.066458 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-02-14 04:22:08.066472 | orchestrator | Saturday 14 February 2026 04:21:27 +0000 (0:00:03.302) 0:00:53.314 ***** 2026-02-14 04:22:08.066483 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:22:08.066496 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:22:08.066507 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:22:08.066518 | orchestrator | 2026-02-14 04:22:08.066529 | orchestrator | TASK [aodh : Creating aodh database] ******************************************* 2026-02-14 04:22:08.066540 | orchestrator | Saturday 14 February 2026 04:21:27 +0000 (0:00:00.305) 0:00:53.619 ***** 2026-02-14 04:22:08.066574 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:22:08.066585 | orchestrator | 2026-02-14 04:22:08.066596 | orchestrator | TASK [aodh : Creating aodh database user and setting permissions] ************** 2026-02-14 04:22:08.066607 | orchestrator | Saturday 14 February 2026 04:21:29 +0000 (0:00:02.176) 0:00:55.796 ***** 2026-02-14 04:22:08.066618 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:22:08.066629 | orchestrator | 2026-02-14 04:22:08.066639 | orchestrator | TASK [aodh : Running aodh bootstrap container] ********************************* 2026-02-14 04:22:08.066650 | orchestrator | Saturday 14 February 2026 04:21:32 +0000 (0:00:02.350) 0:00:58.146 ***** 2026-02-14 04:22:08.066661 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:22:08.066672 | orchestrator | 2026-02-14 04:22:08.066682 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-02-14 04:22:08.066693 | orchestrator | Saturday 
14 February 2026 04:21:45 +0000 (0:00:13.396) 0:01:11.543 ***** 2026-02-14 04:22:08.066704 | orchestrator | 2026-02-14 04:22:08.066714 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-02-14 04:22:08.066725 | orchestrator | Saturday 14 February 2026 04:21:45 +0000 (0:00:00.075) 0:01:11.618 ***** 2026-02-14 04:22:08.066736 | orchestrator | 2026-02-14 04:22:08.066761 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-02-14 04:22:08.066772 | orchestrator | Saturday 14 February 2026 04:21:45 +0000 (0:00:00.076) 0:01:11.694 ***** 2026-02-14 04:22:08.066784 | orchestrator | 2026-02-14 04:22:08.066796 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-api container] **************************** 2026-02-14 04:22:08.066810 | orchestrator | Saturday 14 February 2026 04:21:46 +0000 (0:00:00.265) 0:01:11.960 ***** 2026-02-14 04:22:08.066822 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:22:08.066835 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:22:08.066847 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:22:08.066860 | orchestrator | 2026-02-14 04:22:08.066872 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-evaluator container] ********************** 2026-02-14 04:22:08.066885 | orchestrator | Saturday 14 February 2026 04:21:51 +0000 (0:00:05.648) 0:01:17.609 ***** 2026-02-14 04:22:08.066897 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:22:08.066911 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:22:08.066923 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:22:08.066935 | orchestrator | 2026-02-14 04:22:08.066948 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-listener container] *********************** 2026-02-14 04:22:08.066960 | orchestrator | Saturday 14 February 2026 04:21:57 +0000 (0:00:05.271) 0:01:22.880 ***** 2026-02-14 04:22:08.066972 | orchestrator | changed: [testbed-node-0] 2026-02-14 
04:22:08.066985 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:22:08.066997 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:22:08.067009 | orchestrator | 2026-02-14 04:22:08.067022 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-notifier container] *********************** 2026-02-14 04:22:08.067036 | orchestrator | Saturday 14 February 2026 04:22:02 +0000 (0:00:05.328) 0:01:28.208 ***** 2026-02-14 04:22:08.067048 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:22:08.067062 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:22:08.067074 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:22:08.067086 | orchestrator | 2026-02-14 04:22:08.067099 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 04:22:08.067112 | orchestrator | testbed-node-0 : ok=23  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-14 04:22:08.067126 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-14 04:22:08.067139 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-14 04:22:08.067152 | orchestrator | 2026-02-14 04:22:08.067163 | orchestrator | 2026-02-14 04:22:08.067174 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 04:22:08.067193 | orchestrator | Saturday 14 February 2026 04:22:07 +0000 (0:00:05.383) 0:01:33.591 ***** 2026-02-14 04:22:08.067203 | orchestrator | =============================================================================== 2026-02-14 04:22:08.067214 | orchestrator | aodh : Running aodh bootstrap container -------------------------------- 13.40s 2026-02-14 04:22:08.067225 | orchestrator | aodh : Copying over aodh.conf ------------------------------------------- 8.53s 2026-02-14 04:22:08.067253 | orchestrator | service-ks-register : aodh | Creating endpoints 
------------------------- 6.34s 2026-02-14 04:22:08.067264 | orchestrator | aodh : Restart aodh-api container --------------------------------------- 5.65s 2026-02-14 04:22:08.067275 | orchestrator | aodh : Restart aodh-notifier container ---------------------------------- 5.38s 2026-02-14 04:22:08.067286 | orchestrator | aodh : Restart aodh-listener container ---------------------------------- 5.33s 2026-02-14 04:22:08.067337 | orchestrator | aodh : Restart aodh-evaluator container --------------------------------- 5.27s 2026-02-14 04:22:08.067348 | orchestrator | service-cert-copy : aodh | Copying over extra CA certificates ----------- 4.39s 2026-02-14 04:22:08.067359 | orchestrator | aodh : Copying over config.json files for services ---------------------- 4.11s 2026-02-14 04:22:08.067370 | orchestrator | service-ks-register : aodh | Creating users ----------------------------- 3.85s 2026-02-14 04:22:08.067380 | orchestrator | service-ks-register : aodh | Granting user roles ------------------------ 3.76s 2026-02-14 04:22:08.067391 | orchestrator | service-ks-register : aodh | Creating projects -------------------------- 3.42s 2026-02-14 04:22:08.067402 | orchestrator | service-ks-register : aodh | Creating services -------------------------- 3.35s 2026-02-14 04:22:08.067413 | orchestrator | aodh : Check aodh containers -------------------------------------------- 3.30s 2026-02-14 04:22:08.067424 | orchestrator | service-ks-register : aodh | Creating roles ----------------------------- 3.20s 2026-02-14 04:22:08.067435 | orchestrator | aodh : Creating aodh database user and setting permissions -------------- 2.35s 2026-02-14 04:22:08.067445 | orchestrator | aodh : Creating aodh database ------------------------------------------- 2.18s 2026-02-14 04:22:08.067456 | orchestrator | aodh : Ensuring config directories exist -------------------------------- 1.95s 2026-02-14 04:22:08.067467 | orchestrator | aodh : Copying over wsgi-aodh files for services 
------------------------ 1.85s 2026-02-14 04:22:08.067478 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS key -------- 1.05s 2026-02-14 04:22:10.456485 | orchestrator | 2026-02-14 04:22:10 | INFO  | Task 85cbcfb1-7a01-4224-925f-6b4e32cbf9e0 (kolla-ceph-rgw) was prepared for execution. 2026-02-14 04:22:10.456557 | orchestrator | 2026-02-14 04:22:10 | INFO  | It takes a moment until task 85cbcfb1-7a01-4224-925f-6b4e32cbf9e0 (kolla-ceph-rgw) has been started and output is visible here. 2026-02-14 04:22:45.702251 | orchestrator | 2026-02-14 04:22:45.702405 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-14 04:22:45.702422 | orchestrator | 2026-02-14 04:22:45.702434 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-14 04:22:45.702462 | orchestrator | Saturday 14 February 2026 04:22:14 +0000 (0:00:00.292) 0:00:00.292 ***** 2026-02-14 04:22:45.702474 | orchestrator | ok: [testbed-manager] 2026-02-14 04:22:45.702487 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:22:45.702498 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:22:45.702509 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:22:45.702520 | orchestrator | ok: [testbed-node-3] 2026-02-14 04:22:45.702531 | orchestrator | ok: [testbed-node-4] 2026-02-14 04:22:45.702542 | orchestrator | ok: [testbed-node-5] 2026-02-14 04:22:45.702553 | orchestrator | 2026-02-14 04:22:45.702564 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-14 04:22:45.702575 | orchestrator | Saturday 14 February 2026 04:22:15 +0000 (0:00:00.865) 0:00:01.158 ***** 2026-02-14 04:22:45.702587 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-02-14 04:22:45.702598 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-02-14 04:22:45.702609 | orchestrator | ok: [testbed-node-1] => 
(item=enable_ceph_rgw_True)
2026-02-14 04:22:45.702643 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-02-14 04:22:45.702655 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-02-14 04:22:45.702666 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-02-14 04:22:45.702677 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-02-14 04:22:45.702688 | orchestrator |
2026-02-14 04:22:45.702699 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-02-14 04:22:45.702710 | orchestrator |
2026-02-14 04:22:45.702721 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-02-14 04:22:45.702732 | orchestrator | Saturday 14 February 2026 04:22:16 +0000 (0:00:00.766) 0:00:01.925 *****
2026-02-14 04:22:45.702743 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-14 04:22:45.702756 | orchestrator |
2026-02-14 04:22:45.702767 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-02-14 04:22:45.702779 | orchestrator | Saturday 14 February 2026 04:22:17 +0000 (0:00:01.573) 0:00:03.498 *****
2026-02-14 04:22:45.702792 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2026-02-14 04:22:45.702804 | orchestrator |
2026-02-14 04:22:45.702817 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2026-02-14 04:22:45.702829 | orchestrator | Saturday 14 February 2026 04:22:21 +0000 (0:00:03.639) 0:00:07.138 *****
2026-02-14 04:22:45.702842 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-02-14 04:22:45.702856 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-02-14 04:22:45.702868 | orchestrator |
2026-02-14 04:22:45.702880 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-02-14 04:22:45.702893 | orchestrator | Saturday 14 February 2026 04:22:27 +0000 (0:00:06.193) 0:00:13.332 *****
2026-02-14 04:22:45.702906 | orchestrator | ok: [testbed-manager] => (item=service)
2026-02-14 04:22:45.702919 | orchestrator |
2026-02-14 04:22:45.702931 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-02-14 04:22:45.702944 | orchestrator | Saturday 14 February 2026 04:22:30 +0000 (0:00:03.097) 0:00:16.430 *****
2026-02-14 04:22:45.702956 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-14 04:22:45.702969 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2026-02-14 04:22:45.702981 | orchestrator |
2026-02-14 04:22:45.702994 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-02-14 04:22:45.703006 | orchestrator | Saturday 14 February 2026 04:22:34 +0000 (0:00:03.735) 0:00:20.165 *****
2026-02-14 04:22:45.703018 | orchestrator | ok: [testbed-manager] => (item=admin)
2026-02-14 04:22:45.703031 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2026-02-14 04:22:45.703044 | orchestrator |
2026-02-14 04:22:45.703056 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-02-14 04:22:45.703069 | orchestrator | Saturday 14 February 2026 04:22:40 +0000 (0:00:06.030) 0:00:26.196 *****
2026-02-14 04:22:45.703081 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-02-14 04:22:45.703093 | orchestrator |
2026-02-14 04:22:45.703106 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 04:22:45.703118 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-14 04:22:45.703132 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-14 04:22:45.703144 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-14 04:22:45.703163 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-14 04:22:45.703175 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-14 04:22:45.703204 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-14 04:22:45.703217 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-14 04:22:45.703228 | orchestrator |
2026-02-14 04:22:45.703239 | orchestrator |
2026-02-14 04:22:45.703255 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 04:22:45.703267 | orchestrator | Saturday 14 February 2026 04:22:45 +0000 (0:00:04.723) 0:00:30.919 *****
2026-02-14 04:22:45.703298 | orchestrator | ===============================================================================
2026-02-14 04:22:45.703310 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.19s
2026-02-14 04:22:45.703321 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.03s
2026-02-14 04:22:45.703331 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.72s
2026-02-14 04:22:45.703343 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.74s
2026-02-14 04:22:45.703353 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.64s
2026-02-14 04:22:45.703364 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.10s
2026-02-14 04:22:45.703375 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.57s
2026-02-14 04:22:45.703386 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.87s
2026-02-14 04:22:45.703397 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.77s
2026-02-14 04:22:48.014345 | orchestrator | 2026-02-14 04:22:48 | INFO  | Task e2acdf88-fa9d-4284-adc2-0d4e2e8ec55b (gnocchi) was prepared for execution.
2026-02-14 04:22:48.014444 | orchestrator | 2026-02-14 04:22:48 | INFO  | It takes a moment until task e2acdf88-fa9d-4284-adc2-0d4e2e8ec55b (gnocchi) has been started and output is visible here.
2026-02-14 04:22:53.183199 | orchestrator |
2026-02-14 04:22:53.183373 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-14 04:22:53.183390 | orchestrator |
2026-02-14 04:22:53.183398 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-14 04:22:53.183407 | orchestrator | Saturday 14 February 2026 04:22:52 +0000 (0:00:00.266) 0:00:00.266 *****
2026-02-14 04:22:53.183414 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:22:53.183423 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:22:53.183430 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:22:53.183438 | orchestrator |
2026-02-14 04:22:53.183445 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-14 04:22:53.183453 | orchestrator | Saturday 14 February 2026 04:22:52 +0000 (0:00:00.337) 0:00:00.603 *****
2026-02-14 04:22:53.183460 | orchestrator | ok: [testbed-node-0] => (item=enable_gnocchi_False)
2026-02-14 04:22:53.183469 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_gnocchi_True
2026-02-14 04:22:53.183477 | orchestrator | ok: [testbed-node-1] => (item=enable_gnocchi_False)
2026-02-14 04:22:53.183484 | orchestrator | ok: [testbed-node-2] => (item=enable_gnocchi_False)
2026-02-14 04:22:53.183492 | orchestrator |
2026-02-14 04:22:53.183499 | orchestrator | PLAY [Apply role gnocchi] ******************************************************
2026-02-14 04:22:53.183507 | orchestrator | skipping: no hosts matched
2026-02-14 04:22:53.183514 | orchestrator |
2026-02-14 04:22:53.183522 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 04:22:53.183530 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-14 04:22:53.183563 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-14 04:22:53.183571 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-14 04:22:53.183578 | orchestrator |
2026-02-14 04:22:53.183586 | orchestrator |
2026-02-14 04:22:53.183593 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 04:22:53.183600 | orchestrator | Saturday 14 February 2026 04:22:52 +0000 (0:00:00.347) 0:00:00.951 *****
2026-02-14 04:22:53.183608 | orchestrator | ===============================================================================
2026-02-14 04:22:53.183615 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.35s
2026-02-14 04:22:53.183626 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s
2026-02-14 04:22:55.566086 | orchestrator | 2026-02-14 04:22:55 | INFO  | Task 50f1157e-5fff-48d3-9a90-609b940034ed (manila) was prepared for execution.
2026-02-14 04:22:55.566213 | orchestrator | 2026-02-14 04:22:55 | INFO  | It takes a moment until task 50f1157e-5fff-48d3-9a90-609b940034ed (manila) has been started and output is visible here.
2026-02-14 04:23:38.665548 | orchestrator |
2026-02-14 04:23:38.665684 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-14 04:23:38.665703 | orchestrator |
2026-02-14 04:23:38.665716 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-14 04:23:38.665728 | orchestrator | Saturday 14 February 2026 04:22:59 +0000 (0:00:00.262) 0:00:00.262 *****
2026-02-14 04:23:38.665740 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:23:38.665752 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:23:38.665777 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:23:38.665789 | orchestrator |
2026-02-14 04:23:38.665800 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-14 04:23:38.665811 | orchestrator | Saturday 14 February 2026 04:23:00 +0000 (0:00:00.318) 0:00:00.580 *****
2026-02-14 04:23:38.665826 | orchestrator | ok: [testbed-node-0] => (item=enable_manila_True)
2026-02-14 04:23:38.665851 | orchestrator | ok: [testbed-node-1] => (item=enable_manila_True)
2026-02-14 04:23:38.665876 | orchestrator | ok: [testbed-node-2] => (item=enable_manila_True)
2026-02-14 04:23:38.665895 | orchestrator |
2026-02-14 04:23:38.665935 | orchestrator | PLAY [Apply role manila] *******************************************************
2026-02-14 04:23:38.665955 | orchestrator |
2026-02-14 04:23:38.665975 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-02-14 04:23:38.665995 | orchestrator | Saturday 14 February 2026 04:23:00 +0000 (0:00:00.471) 0:00:01.051 *****
2026-02-14 04:23:38.666097 | orchestrator | included: /ansible/roles/manila/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 04:23:38.666269 | orchestrator |
2026-02-14 04:23:38.666293 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-02-14 04:23:38.666313 | orchestrator | Saturday 14 February 2026 04:23:01 +0000 (0:00:00.559) 0:00:01.610 *****
2026-02-14 04:23:38.666331 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:23:38.666351 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:23:38.666370 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:23:38.666389 | orchestrator |
2026-02-14 04:23:38.666408 | orchestrator | TASK [service-ks-register : manila | Creating services] ************************
2026-02-14 04:23:38.666427 | orchestrator | Saturday 14 February 2026 04:23:01 +0000 (0:00:00.456) 0:00:02.067 *****
2026-02-14 04:23:38.666444 | orchestrator | changed: [testbed-node-0] => (item=manila (share))
2026-02-14 04:23:38.666462 | orchestrator | changed: [testbed-node-0] => (item=manilav2 (sharev2))
2026-02-14 04:23:38.666481 | orchestrator |
2026-02-14 04:23:38.666499 | orchestrator | TASK [service-ks-register : manila | Creating endpoints] ***********************
2026-02-14 04:23:38.666554 | orchestrator | Saturday 14 February 2026 04:23:08 +0000 (0:00:06.828) 0:00:08.895 *****
2026-02-14 04:23:38.666574 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s -> internal)
2026-02-14 04:23:38.666594 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s -> public)
2026-02-14 04:23:38.666613 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api-int.testbed.osism.xyz:8786/v2 -> internal)
2026-02-14 04:23:38.666631 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api.testbed.osism.xyz:8786/v2 -> public)
2026-02-14 04:23:38.666649 | orchestrator |
2026-02-14 04:23:38.666668 | orchestrator | TASK [service-ks-register : manila | Creating projects] ************************
2026-02-14 04:23:38.666687 | orchestrator | Saturday 14 February 2026 04:23:21 +0000 (0:00:13.248) 0:00:22.144 *****
2026-02-14 04:23:38.666705 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-14 04:23:38.666724 | orchestrator |
2026-02-14 04:23:38.666745 | orchestrator | TASK [service-ks-register : manila | Creating users] ***************************
2026-02-14 04:23:38.666764 | orchestrator | Saturday 14 February 2026 04:23:24 +0000 (0:00:03.350) 0:00:25.495 *****
2026-02-14 04:23:38.666781 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-14 04:23:38.666800 | orchestrator | changed: [testbed-node-0] => (item=manila -> service)
2026-02-14 04:23:38.666818 | orchestrator |
2026-02-14 04:23:38.666836 | orchestrator | TASK [service-ks-register : manila | Creating roles] ***************************
2026-02-14 04:23:38.666854 | orchestrator | Saturday 14 February 2026 04:23:28 +0000 (0:00:04.037) 0:00:29.532 *****
2026-02-14 04:23:38.666873 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-14 04:23:38.666891 | orchestrator |
2026-02-14 04:23:38.666910 | orchestrator | TASK [service-ks-register : manila | Granting user roles] **********************
2026-02-14 04:23:38.666929 | orchestrator | Saturday 14 February 2026 04:23:32 +0000 (0:00:03.471) 0:00:33.004 *****
2026-02-14 04:23:38.666949 | orchestrator | changed: [testbed-node-0] => (item=manila -> service -> admin)
2026-02-14 04:23:38.666967 | orchestrator |
2026-02-14 04:23:38.666986 | orchestrator | TASK [manila : Ensuring config directories exist] ******************************
2026-02-14 04:23:38.667004 | orchestrator | Saturday 14 February 2026 04:23:36 +0000 (0:00:03.921) 0:00:36.925 *****
2026-02-14 04:23:38.667057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image':
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-14 04:23:38.667097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-14 04:23:38.667135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-14 04:23:38.667156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-14 04:23:38.667176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-14 04:23:38.667196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 
'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-14 04:23:38.667231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-14 04:23:49.407607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-14 04:23:49.407763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 
'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-14 04:23:49.407781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-14 04:23:49.407805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-14 04:23:49.407817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 
'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-14 04:23:49.407830 | orchestrator |
2026-02-14 04:23:49.407843 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-02-14 04:23:49.407856 | orchestrator | Saturday 14 February 2026 04:23:38 +0000 (0:00:02.387) 0:00:39.313 *****
2026-02-14 04:23:49.407867 | orchestrator | included: /ansible/roles/manila/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 04:23:49.407878 | orchestrator |
2026-02-14 04:23:49.407889 | orchestrator | TASK [manila : Ensuring manila service ceph config subdir exists] **************
2026-02-14 04:23:49.407899 | orchestrator | Saturday 14 February 2026 04:23:39 +0000 (0:00:00.578) 0:00:39.891 *****
2026-02-14 04:23:49.407910 | orchestrator | changed: [testbed-node-0]
2026-02-14 04:23:49.407922 | orchestrator | changed: [testbed-node-1]
2026-02-14 04:23:49.407933 | orchestrator | changed: [testbed-node-2]
2026-02-14 04:23:49.407944 | orchestrator |
2026-02-14 04:23:49.407954 | orchestrator | TASK [manila : Copy over multiple ceph configs for Manila] *********************
2026-02-14 04:23:49.407965 | orchestrator | Saturday 14 February 2026 04:23:40 +0000 (0:00:00.923) 0:00:40.815 *****
2026-02-14 04:23:49.407977 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-14 04:23:49.408015 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-14 04:23:49.408028 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-14 04:23:49.408046 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-14 04:23:49.408058 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-14 04:23:49.408068 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-14 04:23:49.408079 | orchestrator |
2026-02-14 04:23:49.408090 | orchestrator | TASK [manila : Copy over ceph Manila keyrings] *********************************
2026-02-14 04:23:49.408101 | orchestrator | Saturday 14 February 2026 04:23:42 +0000 (0:00:01.901) 0:00:42.716 *****
2026-02-14 04:23:49.408112 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-14 04:23:49.408126 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-14 04:23:49.408138 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-14 04:23:49.408150 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-14 04:23:49.408162 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-14 04:23:49.408174 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-14 04:23:49.408186 | orchestrator |
2026-02-14 04:23:49.408198 | orchestrator | TASK [manila : Ensuring config directory has correct owner and permission] *****
2026-02-14 04:23:49.408211 | orchestrator | Saturday 14 February 2026 04:23:43 +0000 (0:00:01.234) 0:00:43.951 *****
2026-02-14 04:23:49.408224 | orchestrator | ok: [testbed-node-0] => (item=manila-share)
2026-02-14 04:23:49.408266 | orchestrator | ok: [testbed-node-1] => (item=manila-share)
2026-02-14 04:23:49.408283 | orchestrator | ok: [testbed-node-2] => (item=manila-share)
2026-02-14 04:23:49.408296 | orchestrator |
2026-02-14 04:23:49.408309 | orchestrator | TASK [manila : Check if policies shall be overwritten] *************************
2026-02-14 04:23:49.408321 | orchestrator | Saturday 14 February 2026 04:23:44 +0000 (0:00:00.730) 0:00:44.681 *****
2026-02-14 04:23:49.408333 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:23:49.408345 | orchestrator |
2026-02-14 04:23:49.408358 | orchestrator | TASK [manila : Set manila policy file] *****************************************
2026-02-14 04:23:49.408370 | orchestrator | Saturday 14 February 2026 04:23:44 +0000 (0:00:00.131) 0:00:44.813 *****
2026-02-14 04:23:49.408382 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:23:49.408394 | orchestrator | skipping:
[testbed-node-1]
2026-02-14 04:23:49.408406 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:23:49.408419 | orchestrator |
2026-02-14 04:23:49.408432 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-02-14 04:23:49.408444 | orchestrator | Saturday 14 February 2026 04:23:44 +0000 (0:00:00.493) 0:00:45.306 *****
2026-02-14 04:23:49.408465 | orchestrator | included: /ansible/roles/manila/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 04:23:49.408476 | orchestrator |
2026-02-14 04:23:49.408488 | orchestrator | TASK [service-cert-copy : manila | Copying over extra CA certificates] *********
2026-02-14 04:23:49.408499 | orchestrator | Saturday 14 February 2026 04:23:45 +0000 (0:00:00.580) 0:00:45.887 *****
2026-02-14 04:23:49.408518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-14 04:23:50.275185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes':
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-14 04:23:50.275351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-14 04:23:50.275380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-14 04:23:50.275415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-14 04:23:50.275461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-14 04:23:50.275507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-14 04:23:50.275536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-14 04:23:50.275554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-14 04:23:50.275572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-14 04:23:50.275589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-14 04:23:50.275617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-14 04:23:50.275636 | orchestrator |
2026-02-14 04:23:50.275655 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS certificate] ***
2026-02-14 04:23:50.275674 | orchestrator | Saturday 14 February 2026 04:23:49 +0000 (0:00:04.133) 0:00:50.020 *****
2026-02-14 04:23:50.275707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-14 04:23:50.927130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-14 04:23:50.927283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-14 04:23:50.927312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-14 04:23:50.927331 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:23:50.927351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-14 04:23:50.927389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-14 04:23:50.927401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-14 04:23:50.927438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-14 04:23:50.927452 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:23:50.927463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-14 04:23:50.927475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-14 04:23:50.927494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-14 04:23:50.927505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-14 04:23:50.927516 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:23:50.927528 | orchestrator |
2026-02-14 04:23:50.927540 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS key] ******
2026-02-14 04:23:50.927552 | orchestrator | Saturday 14 February 2026 04:23:50 +0000 (0:00:00.873) 0:00:50.894 *****
2026-02-14 04:23:50.927577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-14 04:23:55.515631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-14 04:23:55.515741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-14 04:23:55.515788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-14 04:23:55.515802 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:23:55.515817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-14 04:23:55.515829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-14 04:23:55.515854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-14 04:23:55.515885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-14 04:23:55.515897 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:23:55.515909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-14 04:23:55.515932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-14 04:23:55.515944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-14 04:23:55.515956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-14 04:23:55.515968 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:23:55.515980 | orchestrator |
2026-02-14 04:23:55.515992 | orchestrator | TASK [manila : Copying over config.json files for services] ********************
2026-02-14 04:23:55.516005 | orchestrator | Saturday 14 February 2026 04:23:51 +0000 (0:00:00.869) 0:00:51.763 *****
2026-02-14 04:23:55.516031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-14 04:24:02.201065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-14 04:24:02.201187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-14 04:24:02.201202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-14 04:24:02.201213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-14 04:24:02.201270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-14 04:24:02.201296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-14 04:24:02.201307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-14 04:24:02.201334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-14 04:24:02.201344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-14 04:24:02.201354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-14 04:24:02.201371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-14 04:24:02.201381 | orchestrator |
2026-02-14 04:24:02.201392 | orchestrator | TASK [manila : Copying over manila.conf] ***************************************
2026-02-14 04:24:02.201407 | orchestrator | Saturday 14 February 2026 04:23:55 +0000 (0:00:04.577) 0:00:56.341 *****
2026-02-14 04:24:02.201424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-14 04:24:06.477247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-14 04:24:06.477371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-14 04:24:06.477390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-14 04:24:06.477405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-14 04:24:06.477435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-14 04:24:06.477479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value':
{'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-14 04:24:06.477516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-14 04:24:06.477530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-14 04:24:06.477543 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-14 04:24:06.477555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-14 04:24:06.477573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-14 04:24:06.477586 | orchestrator | 2026-02-14 04:24:06.477601 | orchestrator | TASK [manila : 
Copying over manila-share.conf] ********************************* 2026-02-14 04:24:06.477614 | orchestrator | Saturday 14 February 2026 04:24:02 +0000 (0:00:06.513) 0:01:02.854 ***** 2026-02-14 04:24:06.477634 | orchestrator | changed: [testbed-node-0] => (item=manila-share) 2026-02-14 04:24:06.477675 | orchestrator | changed: [testbed-node-1] => (item=manila-share) 2026-02-14 04:24:06.477687 | orchestrator | changed: [testbed-node-2] => (item=manila-share) 2026-02-14 04:24:06.477698 | orchestrator | 2026-02-14 04:24:06.477709 | orchestrator | TASK [manila : Copying over existing policy file] ****************************** 2026-02-14 04:24:06.477720 | orchestrator | Saturday 14 February 2026 04:24:05 +0000 (0:00:03.605) 0:01:06.460 ***** 2026-02-14 04:24:06.477745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-14 04:24:09.786337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 04:24:09.786452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-14 04:24:09.786471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-14 04:24:09.786485 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:24:09.786516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-14 04:24:09.786549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 04:24:09.786562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-14 04:24:09.786591 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-14 04:24:09.786603 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:24:09.786615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-14 04:24:09.786627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 04:24:09.786644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-14 04:24:09.786664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-14 04:24:09.786677 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:24:09.786691 | orchestrator | 2026-02-14 04:24:09.786705 | orchestrator | TASK [manila : Check manila containers] **************************************** 2026-02-14 04:24:09.786718 | orchestrator | Saturday 14 February 2026 04:24:06 +0000 (0:00:00.634) 0:01:07.095 ***** 2026-02-14 04:24:09.786741 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-14 04:24:50.790175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-14 04:24:50.790378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-14 04:24:50.790438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-14 04:24:50.790454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-14 04:24:50.790466 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-14 04:24:50.790497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-14 04:24:50.790511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-14 04:24:50.790522 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-14 04:24:50.790534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-14 04:24:50.790559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-14 04:24:50.790571 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-14 04:24:50.790583 | orchestrator | 2026-02-14 04:24:50.790596 | orchestrator | TASK [manila : Creating Manila database] *************************************** 2026-02-14 04:24:50.790609 | orchestrator | Saturday 14 February 2026 04:24:09 +0000 (0:00:03.316) 0:01:10.411 ***** 2026-02-14 04:24:50.790623 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:24:50.790637 | orchestrator | 2026-02-14 04:24:50.790650 | orchestrator | TASK [manila : Creating Manila database user and setting permissions] ********** 2026-02-14 04:24:50.790662 | orchestrator | Saturday 14 February 2026 04:24:11 +0000 (0:00:02.118) 0:01:12.530 ***** 2026-02-14 04:24:50.790675 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:24:50.790687 | orchestrator | 2026-02-14 04:24:50.790700 | orchestrator | TASK [manila : Running Manila bootstrap container] ***************************** 2026-02-14 04:24:50.790712 | orchestrator | Saturday 14 February 2026 04:24:14 +0000 (0:00:02.329) 0:01:14.859 ***** 2026-02-14 04:24:50.790724 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:24:50.790737 | orchestrator | 2026-02-14 04:24:50.790750 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-02-14 04:24:50.790764 | orchestrator | Saturday 14 February 2026 04:24:50 +0000 (0:00:36.223) 0:01:51.083 ***** 2026-02-14 04:24:50.790776 | 
orchestrator | 2026-02-14 04:24:50.790796 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-02-14 04:25:41.652652 | orchestrator | Saturday 14 February 2026 04:24:50 +0000 (0:00:00.072) 0:01:51.155 ***** 2026-02-14 04:25:41.652745 | orchestrator | 2026-02-14 04:25:41.652756 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-02-14 04:25:41.652763 | orchestrator | Saturday 14 February 2026 04:24:50 +0000 (0:00:00.071) 0:01:51.227 ***** 2026-02-14 04:25:41.652770 | orchestrator | 2026-02-14 04:25:41.652777 | orchestrator | RUNNING HANDLER [manila : Restart manila-api container] ************************ 2026-02-14 04:25:41.652784 | orchestrator | Saturday 14 February 2026 04:24:50 +0000 (0:00:00.073) 0:01:51.301 ***** 2026-02-14 04:25:41.652791 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:25:41.652799 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:25:41.652806 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:25:41.652813 | orchestrator | 2026-02-14 04:25:41.652820 | orchestrator | RUNNING HANDLER [manila : Restart manila-data container] *********************** 2026-02-14 04:25:41.652849 | orchestrator | Saturday 14 February 2026 04:25:06 +0000 (0:00:15.362) 0:02:06.664 ***** 2026-02-14 04:25:41.652856 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:25:41.652863 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:25:41.652869 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:25:41.652876 | orchestrator | 2026-02-14 04:25:41.652883 | orchestrator | RUNNING HANDLER [manila : Restart manila-scheduler container] ****************** 2026-02-14 04:25:41.652890 | orchestrator | Saturday 14 February 2026 04:25:12 +0000 (0:00:05.944) 0:02:12.608 ***** 2026-02-14 04:25:41.652897 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:25:41.652903 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:25:41.652910 | 
orchestrator | changed: [testbed-node-1] 2026-02-14 04:25:41.652917 | orchestrator | 2026-02-14 04:25:41.652923 | orchestrator | RUNNING HANDLER [manila : Restart manila-share container] ********************** 2026-02-14 04:25:41.652930 | orchestrator | Saturday 14 February 2026 04:25:22 +0000 (0:00:10.118) 0:02:22.726 ***** 2026-02-14 04:25:41.652937 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:25:41.652943 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:25:41.652950 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:25:41.652957 | orchestrator | 2026-02-14 04:25:41.652963 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 04:25:41.652971 | orchestrator | testbed-node-0 : ok=28  changed=20  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-14 04:25:41.652979 | orchestrator | testbed-node-1 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-14 04:25:41.652986 | orchestrator | testbed-node-2 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-14 04:25:41.652992 | orchestrator | 2026-02-14 04:25:41.652999 | orchestrator | 2026-02-14 04:25:41.653006 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 04:25:41.653013 | orchestrator | Saturday 14 February 2026 04:25:41 +0000 (0:00:19.030) 0:02:41.757 ***** 2026-02-14 04:25:41.653019 | orchestrator | =============================================================================== 2026-02-14 04:25:41.653026 | orchestrator | manila : Running Manila bootstrap container ---------------------------- 36.22s 2026-02-14 04:25:41.653045 | orchestrator | manila : Restart manila-share container -------------------------------- 19.03s 2026-02-14 04:25:41.653052 | orchestrator | manila : Restart manila-api container ---------------------------------- 15.36s 2026-02-14 04:25:41.653058 | orchestrator | service-ks-register : 
manila | Creating endpoints ---------------------- 13.25s 2026-02-14 04:25:41.653065 | orchestrator | manila : Restart manila-scheduler container ---------------------------- 10.12s 2026-02-14 04:25:41.653071 | orchestrator | service-ks-register : manila | Creating services ------------------------ 6.83s 2026-02-14 04:25:41.653078 | orchestrator | manila : Copying over manila.conf --------------------------------------- 6.51s 2026-02-14 04:25:41.653085 | orchestrator | manila : Restart manila-data container ---------------------------------- 5.94s 2026-02-14 04:25:41.653091 | orchestrator | manila : Copying over config.json files for services -------------------- 4.58s 2026-02-14 04:25:41.653098 | orchestrator | service-cert-copy : manila | Copying over extra CA certificates --------- 4.13s 2026-02-14 04:25:41.653105 | orchestrator | service-ks-register : manila | Creating users --------------------------- 4.04s 2026-02-14 04:25:41.653112 | orchestrator | service-ks-register : manila | Granting user roles ---------------------- 3.92s 2026-02-14 04:25:41.653119 | orchestrator | manila : Copying over manila-share.conf --------------------------------- 3.61s 2026-02-14 04:25:41.653146 | orchestrator | service-ks-register : manila | Creating roles --------------------------- 3.47s 2026-02-14 04:25:41.653153 | orchestrator | service-ks-register : manila | Creating projects ------------------------ 3.35s 2026-02-14 04:25:41.653160 | orchestrator | manila : Check manila containers ---------------------------------------- 3.32s 2026-02-14 04:25:41.653167 | orchestrator | manila : Ensuring config directories exist ------------------------------ 2.39s 2026-02-14 04:25:41.653207 | orchestrator | manila : Creating Manila database user and setting permissions ---------- 2.33s 2026-02-14 04:25:41.653214 | orchestrator | manila : Creating Manila database --------------------------------------- 2.12s 2026-02-14 04:25:41.653223 | orchestrator | manila : Copy over multiple ceph 
configs for Manila --------------------- 1.90s 2026-02-14 04:25:41.949006 | orchestrator | + sh -c /opt/configuration/scripts/deploy/400-monitoring.sh 2026-02-14 04:25:54.055808 | orchestrator | 2026-02-14 04:25:54 | INFO  | Task ca6503bb-8f70-492f-b03e-d7779e3e8cd7 (netdata) was prepared for execution. 2026-02-14 04:25:54.055953 | orchestrator | 2026-02-14 04:25:54 | INFO  | It takes a moment until task ca6503bb-8f70-492f-b03e-d7779e3e8cd7 (netdata) has been started and output is visible here. 2026-02-14 04:27:25.896917 | orchestrator | 2026-02-14 04:27:25.897035 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-14 04:27:25.897054 | orchestrator | 2026-02-14 04:27:25.897066 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-14 04:27:25.897079 | orchestrator | Saturday 14 February 2026 04:25:58 +0000 (0:00:00.228) 0:00:00.228 ***** 2026-02-14 04:27:25.897091 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-02-14 04:27:25.897102 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-02-14 04:27:25.897113 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-02-14 04:27:25.897170 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-02-14 04:27:25.897182 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-02-14 04:27:25.897194 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-02-14 04:27:25.897205 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-02-14 04:27:25.897216 | orchestrator | 2026-02-14 04:27:25.897227 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-02-14 04:27:25.897238 | orchestrator | 2026-02-14 04:27:25.897249 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 
2026-02-14 04:27:25.897260 | orchestrator | Saturday 14 February 2026 04:25:59 +0000 (0:00:00.878) 0:00:01.107 ***** 2026-02-14 04:27:25.897273 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-14 04:27:25.897287 | orchestrator | 2026-02-14 04:27:25.897299 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-02-14 04:27:25.897311 | orchestrator | Saturday 14 February 2026 04:26:00 +0000 (0:00:01.343) 0:00:02.450 ***** 2026-02-14 04:27:25.897322 | orchestrator | ok: [testbed-manager] 2026-02-14 04:27:25.897335 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:27:25.897346 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:27:25.897357 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:27:25.897368 | orchestrator | ok: [testbed-node-3] 2026-02-14 04:27:25.897379 | orchestrator | ok: [testbed-node-4] 2026-02-14 04:27:25.897390 | orchestrator | ok: [testbed-node-5] 2026-02-14 04:27:25.897401 | orchestrator | 2026-02-14 04:27:25.897412 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-02-14 04:27:25.897423 | orchestrator | Saturday 14 February 2026 04:26:02 +0000 (0:00:01.832) 0:00:04.282 ***** 2026-02-14 04:27:25.897434 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:27:25.897446 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:27:25.897459 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:27:25.897471 | orchestrator | ok: [testbed-node-4] 2026-02-14 04:27:25.897484 | orchestrator | ok: [testbed-manager] 2026-02-14 04:27:25.897496 | orchestrator | ok: [testbed-node-5] 2026-02-14 04:27:25.897509 | orchestrator | ok: [testbed-node-3] 2026-02-14 04:27:25.897521 | orchestrator | 2026-02-14 04:27:25.897534 | orchestrator | TASK [osism.services.netdata 
: Add repository gpg key] ************************* 2026-02-14 04:27:25.897579 | orchestrator | Saturday 14 February 2026 04:26:04 +0000 (0:00:02.193) 0:00:06.475 ***** 2026-02-14 04:27:25.897599 | orchestrator | changed: [testbed-manager] 2026-02-14 04:27:25.897619 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:27:25.897657 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:27:25.897676 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:27:25.897696 | orchestrator | changed: [testbed-node-3] 2026-02-14 04:27:25.897715 | orchestrator | changed: [testbed-node-4] 2026-02-14 04:27:25.897733 | orchestrator | changed: [testbed-node-5] 2026-02-14 04:27:25.897752 | orchestrator | 2026-02-14 04:27:25.897771 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-02-14 04:27:25.897791 | orchestrator | Saturday 14 February 2026 04:26:06 +0000 (0:00:01.513) 0:00:07.989 ***** 2026-02-14 04:27:25.897810 | orchestrator | changed: [testbed-manager] 2026-02-14 04:27:25.897828 | orchestrator | changed: [testbed-node-3] 2026-02-14 04:27:25.897843 | orchestrator | changed: [testbed-node-4] 2026-02-14 04:27:25.897854 | orchestrator | changed: [testbed-node-5] 2026-02-14 04:27:25.897864 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:27:25.897875 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:27:25.897886 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:27:25.897896 | orchestrator | 2026-02-14 04:27:25.897907 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-02-14 04:27:25.897919 | orchestrator | Saturday 14 February 2026 04:26:21 +0000 (0:00:15.017) 0:00:23.007 ***** 2026-02-14 04:27:25.897930 | orchestrator | changed: [testbed-manager] 2026-02-14 04:27:25.897940 | orchestrator | changed: [testbed-node-3] 2026-02-14 04:27:25.897951 | orchestrator | changed: [testbed-node-4] 2026-02-14 04:27:25.897962 | orchestrator | changed: 
[testbed-node-5] 2026-02-14 04:27:25.897973 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:27:25.897984 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:27:25.897994 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:27:25.898005 | orchestrator | 2026-02-14 04:27:25.898077 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-02-14 04:27:25.898091 | orchestrator | Saturday 14 February 2026 04:27:00 +0000 (0:00:39.127) 0:01:02.134 ***** 2026-02-14 04:27:25.898103 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-14 04:27:25.898116 | orchestrator | 2026-02-14 04:27:25.898212 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-02-14 04:27:25.898227 | orchestrator | Saturday 14 February 2026 04:27:01 +0000 (0:00:01.571) 0:01:03.706 ***** 2026-02-14 04:27:25.898238 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-02-14 04:27:25.898250 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-02-14 04:27:25.898261 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-02-14 04:27:25.898272 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-02-14 04:27:25.898304 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-02-14 04:27:25.898315 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-02-14 04:27:25.898326 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-02-14 04:27:25.898337 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-02-14 04:27:25.898348 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-02-14 04:27:25.898359 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 
2026-02-14 04:27:25.898370 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2026-02-14 04:27:25.898380 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2026-02-14 04:27:25.898391 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-02-14 04:27:25.898402 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-02-14 04:27:25.898413 | orchestrator | 2026-02-14 04:27:25.898424 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-02-14 04:27:25.898449 | orchestrator | Saturday 14 February 2026 04:27:05 +0000 (0:00:03.403) 0:01:07.109 ***** 2026-02-14 04:27:25.898460 | orchestrator | ok: [testbed-manager] 2026-02-14 04:27:25.898471 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:27:25.898481 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:27:25.898492 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:27:25.898503 | orchestrator | ok: [testbed-node-3] 2026-02-14 04:27:25.898514 | orchestrator | ok: [testbed-node-4] 2026-02-14 04:27:25.898524 | orchestrator | ok: [testbed-node-5] 2026-02-14 04:27:25.898535 | orchestrator | 2026-02-14 04:27:25.898546 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-02-14 04:27:25.898557 | orchestrator | Saturday 14 February 2026 04:27:06 +0000 (0:00:01.278) 0:01:08.388 ***** 2026-02-14 04:27:25.898568 | orchestrator | changed: [testbed-manager] 2026-02-14 04:27:25.898579 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:27:25.898590 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:27:25.898601 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:27:25.898612 | orchestrator | changed: [testbed-node-3] 2026-02-14 04:27:25.898623 | orchestrator | changed: [testbed-node-4] 2026-02-14 04:27:25.898634 | orchestrator | changed: [testbed-node-5] 2026-02-14 04:27:25.898645 | orchestrator | 2026-02-14 04:27:25.898656 | orchestrator | TASK 
[osism.services.netdata : Add netdata user to docker group] *************** 2026-02-14 04:27:25.898667 | orchestrator | Saturday 14 February 2026 04:27:07 +0000 (0:00:01.323) 0:01:09.711 ***** 2026-02-14 04:27:25.898678 | orchestrator | ok: [testbed-manager] 2026-02-14 04:27:25.898689 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:27:25.898699 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:27:25.898710 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:27:25.898721 | orchestrator | ok: [testbed-node-3] 2026-02-14 04:27:25.898732 | orchestrator | ok: [testbed-node-4] 2026-02-14 04:27:25.898743 | orchestrator | ok: [testbed-node-5] 2026-02-14 04:27:25.898753 | orchestrator | 2026-02-14 04:27:25.898764 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-02-14 04:27:25.898775 | orchestrator | Saturday 14 February 2026 04:27:09 +0000 (0:00:01.236) 0:01:10.947 ***** 2026-02-14 04:27:25.898786 | orchestrator | ok: [testbed-manager] 2026-02-14 04:27:25.898797 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:27:25.898808 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:27:25.898818 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:27:25.898829 | orchestrator | ok: [testbed-node-3] 2026-02-14 04:27:25.898840 | orchestrator | ok: [testbed-node-4] 2026-02-14 04:27:25.898850 | orchestrator | ok: [testbed-node-5] 2026-02-14 04:27:25.898861 | orchestrator | 2026-02-14 04:27:25.898872 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-02-14 04:27:25.898892 | orchestrator | Saturday 14 February 2026 04:27:10 +0000 (0:00:01.676) 0:01:12.624 ***** 2026-02-14 04:27:25.898903 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-02-14 04:27:25.898917 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml 
for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-14 04:27:25.898928 | orchestrator | 2026-02-14 04:27:25.898939 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-02-14 04:27:25.898950 | orchestrator | Saturday 14 February 2026 04:27:12 +0000 (0:00:01.415) 0:01:14.039 ***** 2026-02-14 04:27:25.898961 | orchestrator | changed: [testbed-manager] 2026-02-14 04:27:25.898972 | orchestrator | 2026-02-14 04:27:25.898983 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-02-14 04:27:25.898995 | orchestrator | Saturday 14 February 2026 04:27:14 +0000 (0:00:02.145) 0:01:16.185 ***** 2026-02-14 04:27:25.899005 | orchestrator | changed: [testbed-manager] 2026-02-14 04:27:25.899016 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:27:25.899027 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:27:25.899044 | orchestrator | changed: [testbed-node-3] 2026-02-14 04:27:25.899055 | orchestrator | changed: [testbed-node-5] 2026-02-14 04:27:25.899066 | orchestrator | changed: [testbed-node-4] 2026-02-14 04:27:25.899077 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:27:25.899088 | orchestrator | 2026-02-14 04:27:25.899099 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 04:27:25.899110 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-14 04:27:25.899122 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-14 04:27:25.899152 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-14 04:27:25.899163 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-14 04:27:25.899182 | orchestrator | testbed-node-3 : ok=15  
changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-14 04:27:26.327792 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-14 04:27:26.327894 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-14 04:27:26.327910 | orchestrator | 2026-02-14 04:27:26.327922 | orchestrator | 2026-02-14 04:27:26.327934 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 04:27:26.327946 | orchestrator | Saturday 14 February 2026 04:27:25 +0000 (0:00:11.564) 0:01:27.750 ***** 2026-02-14 04:27:26.327958 | orchestrator | =============================================================================== 2026-02-14 04:27:26.327968 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 39.13s 2026-02-14 04:27:26.327979 | orchestrator | osism.services.netdata : Add repository -------------------------------- 15.02s 2026-02-14 04:27:26.327990 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.56s 2026-02-14 04:27:26.328000 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.40s 2026-02-14 04:27:26.328011 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.19s 2026-02-14 04:27:26.328022 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.15s 2026-02-14 04:27:26.328032 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.83s 2026-02-14 04:27:26.328043 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.68s 2026-02-14 04:27:26.328053 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.57s 2026-02-14 04:27:26.328064 | orchestrator | osism.services.netdata : Add repository gpg key 
------------------------- 1.51s 2026-02-14 04:27:26.328074 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.42s 2026-02-14 04:27:26.328085 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.34s 2026-02-14 04:27:26.328096 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.32s 2026-02-14 04:27:26.328107 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.28s 2026-02-14 04:27:26.328118 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.24s 2026-02-14 04:27:26.328183 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.88s 2026-02-14 04:27:28.733929 | orchestrator | 2026-02-14 04:27:28 | INFO  | Task 15478687-046f-4423-9ddc-8f808040f5d0 (prometheus) was prepared for execution. 2026-02-14 04:27:28.734116 | orchestrator | 2026-02-14 04:27:28 | INFO  | It takes a moment until task 15478687-046f-4423-9ddc-8f808040f5d0 (prometheus) has been started and output is visible here. 
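The PLAY RECAP lines emitted after each play above (for example `testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 ...`) can be checked mechanically when post-processing a console log like this one. A minimal sketch, assuming only the recap format visible in this log; the helper names and the pass/fail criterion are illustrative, not part of the testbed or Zuul tooling:

```python
import re

# Matches Ansible PLAY RECAP lines such as:
#   testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counts>(?:\w+=\d+\s*)+)$")

def parse_recap(line: str) -> tuple[str, dict[str, int]]:
    """Return (hostname, {stat: count}) for one PLAY RECAP line."""
    m = RECAP_RE.match(line.strip())
    if m is None:
        raise ValueError(f"not a recap line: {line!r}")
    counts = {
        key: int(value)
        for key, value in (pair.split("=") for pair in m.group("counts").split())
    }
    return m.group("host"), counts

def play_succeeded(recap_lines: list[str]) -> bool:
    """True when every host reports zero failed and zero unreachable tasks,
    which is what the recaps in this log show for all testbed nodes."""
    for line in recap_lines:
        _, counts = parse_recap(line)
        if counts.get("failed", 0) or counts.get("unreachable", 0):
            return False
    return True
```

For the netdata play above, feeding the seven `testbed-*` recap lines to `play_succeeded` would return `True`, since all hosts report `failed=0 unreachable=0`.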
2026-02-14 04:27:38.220798 | orchestrator | 2026-02-14 04:27:38.220916 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-14 04:27:38.220928 | orchestrator | 2026-02-14 04:27:38.220933 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-14 04:27:38.220937 | orchestrator | Saturday 14 February 2026 04:27:32 +0000 (0:00:00.270) 0:00:00.270 ***** 2026-02-14 04:27:38.220942 | orchestrator | ok: [testbed-manager] 2026-02-14 04:27:38.220946 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:27:38.220951 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:27:38.220955 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:27:38.220959 | orchestrator | ok: [testbed-node-3] 2026-02-14 04:27:38.220963 | orchestrator | ok: [testbed-node-4] 2026-02-14 04:27:38.220967 | orchestrator | ok: [testbed-node-5] 2026-02-14 04:27:38.220971 | orchestrator | 2026-02-14 04:27:38.220975 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-14 04:27:38.220978 | orchestrator | Saturday 14 February 2026 04:27:33 +0000 (0:00:00.870) 0:00:01.141 ***** 2026-02-14 04:27:38.220983 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-02-14 04:27:38.220987 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-02-14 04:27:38.220991 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-02-14 04:27:38.220995 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-02-14 04:27:38.220999 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-02-14 04:27:38.221002 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-02-14 04:27:38.221006 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-02-14 04:27:38.221010 | orchestrator | 2026-02-14 04:27:38.221014 | orchestrator | PLAY [Apply role 
prometheus] *************************************************** 2026-02-14 04:27:38.221018 | orchestrator | 2026-02-14 04:27:38.221021 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-02-14 04:27:38.221025 | orchestrator | Saturday 14 February 2026 04:27:34 +0000 (0:00:00.904) 0:00:02.045 ***** 2026-02-14 04:27:38.221030 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-14 04:27:38.221035 | orchestrator | 2026-02-14 04:27:38.221039 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-02-14 04:27:38.221044 | orchestrator | Saturday 14 February 2026 04:27:36 +0000 (0:00:01.393) 0:00:03.439 ***** 2026-02-14 04:27:38.221053 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-14 04:27:38.221063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 
'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-14 04:27:38.221069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-14 04:27:38.221093 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-14 04:27:38.221167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-14 04:27:38.221175 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:27:38.221179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:27:38.221183 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-14 04:27:38.221187 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-14 04:27:38.221192 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-14 04:27:38.221201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:27:38.221209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:27:39.088535 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-14 04:27:39.088639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:27:39.088659 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 
2026-02-14 04:27:39.088674 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-14 04:27:39.088708 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-14 04:27:39.088720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:27:39.088756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-14 04:27:39.088770 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-14 04:27:39.088781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-14 04:27:39.088793 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:27:39.088804 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-14 04:27:39.088815 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-14 04:27:39.088835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-14 04:27:39.088846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:27:39.088870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:27:43.911512 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-14 04:27:43.911625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:27:43.911642 | orchestrator | 2026-02-14 04:27:43.911655 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-02-14 04:27:43.911667 | orchestrator | Saturday 14 February 2026 04:27:39 +0000 (0:00:02.937) 0:00:06.377 ***** 2026-02-14 04:27:43.911678 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-14 04:27:43.911690 | orchestrator | 2026-02-14 04:27:43.911700 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-02-14 04:27:43.911710 | orchestrator | Saturday 14 February 2026 04:27:40 +0000 (0:00:01.657) 0:00:08.034 ***** 2026-02-14 04:27:43.911722 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-14 04:27:43.911757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-14 04:27:43.911768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-14 04:27:43.911793 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-14 04:27:43.911821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 
2026-02-14 04:27:43.911832 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-14 04:27:43.911842 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-14 04:27:43.911852 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-14 04:27:43.911870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 
'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:27:43.911881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:27:43.911892 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-14 04:27:43.911907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:27:43.911926 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-14 04:27:46.031412 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-14 04:27:46.031520 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-14 04:27:46.031560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:27:46.031574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:27:46.031586 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-14 04:27:46.031598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:27:46.031624 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-14 04:27:46.031654 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-14 04:27:46.031669 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True}}}}) 2026-02-14 04:27:46.031690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-14 04:27:46.031702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-14 04:27:46.031713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-14 04:27:46.031731 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': 
{'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:27:46.031743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:27:46.031762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:27:47.019024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:27:47.019211 | orchestrator | 2026-02-14 04:27:47.019232 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-02-14 04:27:47.019245 | orchestrator | Saturday 14 February 2026 04:27:46 +0000 (0:00:05.270) 0:00:13.305 ***** 2026-02-14 04:27:47.019259 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-14 04:27:47.019273 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-14 04:27:47.019286 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-14 04:27:47.019417 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-14 04:27:47.019462 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-02-14 04:27:47.019483 | orchestrator | skipping: [testbed-manager] 2026-02-14 04:27:47.019496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-14 04:27:47.019508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 04:27:47.019520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 04:27:47.019532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-14 04:27:47.019544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 04:27:47.019558 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:27:47.019576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-14 04:27:47.019590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 04:27:47.019618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 04:27:47.602490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-14 04:27:47.602599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 04:27:47.602616 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:27:47.602631 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-14 04:27:47.602644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 04:27:47.602655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 04:27:47.602685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-14 04:27:47.602719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 04:27:47.602730 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:27:47.602760 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-14 04:27:47.602772 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-14 
04:27:47.602783 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-14 04:27:47.602795 | orchestrator | skipping: [testbed-node-3] 2026-02-14 04:27:47.602806 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-14 04:27:47.602818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-14 04:27:47.602834 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-14 04:27:47.602854 | orchestrator | skipping: [testbed-node-4] 2026-02-14 04:27:47.602866 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-14 04:27:47.602884 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-14 04:27:48.634337 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-14 04:27:48.634447 | orchestrator | skipping: [testbed-node-5] 2026-02-14 04:27:48.634473 | orchestrator | 2026-02-14 04:27:48.634494 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-02-14 04:27:48.634515 | orchestrator | Saturday 14 February 2026 04:27:47 +0000 (0:00:01.573) 0:00:14.878 ***** 2026-02-14 04:27:48.634536 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-14 04:27:48.634558 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-14 04:27:48.634578 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-14 04:27:48.634652 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-14 04:27:48.634704 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 04:27:48.634720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-14 04:27:48.634732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 04:27:48.634744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 04:27:48.634756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-14 04:27:48.634767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 04:27:48.634792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-14 04:27:48.634804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 04:27:48.634823 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 04:27:50.063246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-14 04:27:50.063353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 04:27:50.063370 | orchestrator | skipping: [testbed-manager] 2026-02-14 04:27:50.063385 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-14 04:27:50.063397 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-14 04:27:50.063462 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-14 04:27:50.063506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-14 04:27:50.063519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 04:27:50.063551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 04:27:50.063572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-14 04:27:50.063593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 04:27:50.063612 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:27:50.063630 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:27:50.063649 | orchestrator | skipping: [testbed-node-3] 2026-02-14 04:27:50.063667 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:27:50.063686 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-14 04:27:50.063730 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-14 04:27:50.063757 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-14 04:27:50.063771 | orchestrator | skipping: [testbed-node-4] 2026-02-14 04:27:50.063785 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-14 04:27:50.063809 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-14 04:27:53.858621 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-14 04:27:53.858725 | orchestrator | skipping: [testbed-node-5] 2026-02-14 04:27:53.858737 | orchestrator | 2026-02-14 04:27:53.858746 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-02-14 04:27:53.858754 | orchestrator | Saturday 14 February 2026 04:27:50 +0000 (0:00:02.445) 0:00:17.323 ***** 2026-02-14 04:27:53.858762 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-14 04:27:53.858801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-14 04:27:53.858822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-14 04:27:53.858829 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-14 04:27:53.858836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-14 04:27:53.858856 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-14 04:27:53.858864 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-14 04:27:53.858871 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-14 04:27:53.858878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:27:53.858890 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-14 04:27:53.858900 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-14 04:27:53.858907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:27:53.858914 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-14 04:27:53.858926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:27:56.424582 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-14 04:27:56.424712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:27:56.424751 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-14 04:27:56.424763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:27:56.424787 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-14 04:27:56.424803 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 
'dimensions': {}}}) 2026-02-14 04:27:56.424821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:27:56.424859 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-14 04:27:56.424908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-14 04:27:56.424993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-14 04:27:56.425011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-14 04:27:56.425037 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-02-14 04:27:56.425055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:27:56.425073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:27:56.425105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:28:00.556280 | orchestrator | 2026-02-14 04:28:00.556414 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-02-14 04:28:00.556432 | orchestrator | Saturday 14 February 2026 04:27:56 +0000 (0:00:06.377) 0:00:23.701 ***** 2026-02-14 
04:28:00.556444 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-14 04:28:00.556456 | orchestrator | 2026-02-14 04:28:00.556468 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-02-14 04:28:00.556479 | orchestrator | Saturday 14 February 2026 04:27:57 +0000 (0:00:00.978) 0:00:24.680 ***** 2026-02-14 04:28:00.556492 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1327340, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4238813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:00.556508 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1327340, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4238813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:00.556535 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1327340, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1771036265.4238813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:00.556547 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1327340, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4238813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-14 04:28:00.556559 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1327379, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4296389, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:00.556571 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1327340, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4238813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:00.556608 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1327379, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4296389, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:00.556621 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1327379, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4296389, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:00.556633 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1327340, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4238813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:00.556649 | orchestrator | skipping: 
[testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1327340, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4238813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:00.556661 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1327331, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.423028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:00.556672 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1327331, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.423028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:00.556684 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1327331, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.423028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:00.556709 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1327379, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4296389, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:02.237793 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1327365, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4279625, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:02.237889 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1327365, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 
'mtime': 1764530892.0, 'ctime': 1771036265.4279625, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:02.237919 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1327379, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4296389, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:02.237931 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1327379, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4296389, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:02.237944 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1327365, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4279625, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:02.237955 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1327379, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4296389, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-14 04:28:02.237985 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1327331, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.423028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:02.238013 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1327331, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.423028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:02.238075 | orchestrator | skipping: 
[testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1327325, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4212701, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:02.238092 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1327331, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.423028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:02.238104 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1327325, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4212701, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:02.238150 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1327325, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4212701, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:02.238183 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1327365, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4279625, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:02.238202 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1327365, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4279625, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:02.238234 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1327344, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1771036265.424281, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:03.430403 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1327344, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.424281, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:03.430499 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1327325, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4212701, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:03.430541 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1327365, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4279625, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:03.430563 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1327344, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.424281, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:03.430610 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1327344, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.424281, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:03.430623 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1327360, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4263382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:03.430635 | orchestrator | skipping: [testbed-node-4] => 
(item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1327325, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4212701, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:03.430670 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1327325, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4212701, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:03.430692 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1327360, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4263382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:03.430710 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1327360, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4263382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:03.430721 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1327347, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4246473, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:03.430741 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1327344, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.424281, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:03.430752 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1327331, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1771036265.423028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-14 04:28:03.430764 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1327344, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.424281, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:03.430783 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1327360, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4263382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:04.837533 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1327347, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4246473, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:04.837635 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1327347, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4246473, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:04.837651 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1327360, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4263382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:04.837682 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1327337, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4235458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:04.837695 | orchestrator | skipping: [testbed-node-4] => (item={'path': 
'/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1327360, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4263382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:04.837706 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1327347, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4246473, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:04.837718 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1327347, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4246473, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:04.837745 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1327337, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4235458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:04.837763 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1327337, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4235458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:04.837781 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1327347, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4246473, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:04.837792 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1327365, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1771036265.4279625, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-14 04:28:04.837804 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327378, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4293463, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:04.837815 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1327337, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4235458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:04.837826 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1327337, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4235458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:04.837844 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1327337, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4235458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:06.045768 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327378, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4293463, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:06.045897 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327378, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4293463, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:06.045924 | orchestrator | skipping: [testbed-node-3] 
=> (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327378, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4293463, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:06.045990 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327316, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4203348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:06.046060 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327378, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4293463, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:06.046089 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327316, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4203348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:06.046135 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327316, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4203348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:06.046189 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327378, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4293463, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:06.046216 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1327392, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1771036265.4314282, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:06.046227 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1327325, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4212701, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-14 04:28:06.046239 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327316, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4203348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:06.046251 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327316, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4203348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:06.046271 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1327392, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4314282, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:06.046290 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327316, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4203348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:06.046339 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1327392, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4314282, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:07.450405 | orchestrator | skipping: 
[testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1327392, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4314282, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:07.450501 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1327376, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4287283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:07.450519 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1327376, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4287283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:07.450532 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1327392, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4314282, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:07.450543 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1327376, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4287283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:07.450556 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1327376, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4287283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:07.450600 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327328, 'dev': 108, 'nlink': 1, 
'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4218826, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:07.450630 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1327392, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4314282, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:07.450644 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327328, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4218826, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:07.450656 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1327376, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4287283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:07.450668 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327328, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4218826, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:07.450680 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1327321, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4208388, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:07.450692 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327328, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4218826, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:07.450716 | 
orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1327344, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.424281, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-14 04:28:07.450737 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1327376, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4287283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:08.784992 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327328, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4218826, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-14 04:28:08.785229 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1327321, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4208388, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-14 04:28:08.785252 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1327321, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4208388, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-14 04:28:08.785266 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327328, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4218826, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-14 04:28:08.785278 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1327358, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4263382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-14 04:28:08.785324 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1327358, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4263382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-14 04:28:08.785337 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1327321, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4208388, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-14 04:28:08.785367 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1327321, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4208388, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-14 04:28:08.785380 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1327321, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4208388, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-14 04:28:08.785391 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1327358, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4263382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-14 04:28:08.785403 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1327360, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4263382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-14 04:28:08.785422 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1327358, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4263382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-14 04:28:08.785449 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1327358, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4263382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-14 04:28:08.785460 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1327351, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4255733, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-14 04:28:08.785481 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1327351, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4255733, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-14 04:28:17.800734 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1327351, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4255733, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-14 04:28:17.800843 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1327358, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4263382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-14 04:28:17.800857 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1327351, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4255733, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-14 04:28:17.800892 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1327351, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4255733, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-14 04:28:17.800917 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1327387, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4314282, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-14 04:28:17.800930 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:28:17.800943 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1327351, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4255733, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-14 04:28:17.800954 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1327387, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4314282, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-14 04:28:17.800980 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:28:17.800991 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1327387, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4314282, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-14 04:28:17.801001 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:28:17.801012 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1327387, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4314282, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-14 04:28:17.801029 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:28:17.801040 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1327387, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4314282, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-14 04:28:17.801050 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:28:17.801060 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1327387, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4314282, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-14 04:28:17.801070 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:28:17.801085 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1327347, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4246473, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-14 04:28:17.801096 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1327337, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4235458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-14 04:28:17.801145 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327378, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4293463, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-14 04:28:40.836509 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327316, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4203348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-14 04:28:40.836623 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1327392, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4314282, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-14 04:28:40.836666 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1327376, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4287283, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-14 04:28:40.836677 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327328, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4218826, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-14 04:28:40.836702 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1327321, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4208388, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-14 04:28:40.836714 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1327358, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4263382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-14 04:28:40.836725 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1327351, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4255733, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-14 04:28:40.836755 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1327387, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4314282, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-14 04:28:40.836775 | orchestrator |
2026-02-14 04:28:40.836785 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-02-14 04:28:40.836796 | orchestrator | Saturday 14 February 2026 04:28:21 +0000 (0:00:24.280) 0:00:48.961 *****
2026-02-14 04:28:40.836805 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-14 04:28:40.836815 | orchestrator |
2026-02-14 04:28:40.836824 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-02-14 04:28:40.836833 | orchestrator | Saturday 14 February 2026 04:28:22 +0000 (0:00:00.745) 0:00:49.706 *****
2026-02-14 04:28:40.836841 | orchestrator | [WARNING]: Skipped
2026-02-14 04:28:40.836853 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-14 04:28:40.836863 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2026-02-14 04:28:40.836873 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-14 04:28:40.836882 | orchestrator | manager/prometheus.yml.d' is not a directory
2026-02-14 04:28:40.836892 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-14 04:28:40.836901 | orchestrator | [WARNING]: Skipped
2026-02-14 04:28:40.836910 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-14 04:28:40.836919 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2026-02-14 04:28:40.836927 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-14 04:28:40.836936 | orchestrator | node-0/prometheus.yml.d' is not a directory
2026-02-14 04:28:40.836945 | orchestrator | [WARNING]: Skipped
2026-02-14 04:28:40.836954 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-14 04:28:40.836963 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2026-02-14 04:28:40.836971 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-14 04:28:40.836979 | orchestrator | node-1/prometheus.yml.d' is not a directory
2026-02-14 04:28:40.836988 | orchestrator | [WARNING]: Skipped
2026-02-14 04:28:40.836996 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-14 04:28:40.837005 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2026-02-14 04:28:40.837014 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-14 04:28:40.837023 | orchestrator | node-2/prometheus.yml.d' is not a directory
2026-02-14 04:28:40.837032 | orchestrator | [WARNING]: Skipped
2026-02-14 04:28:40.837041 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-14 04:28:40.837051 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2026-02-14 04:28:40.837067 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-14 04:28:40.837077 | orchestrator | node-3/prometheus.yml.d' is not a directory
2026-02-14 04:28:40.837109 | orchestrator | [WARNING]: Skipped
2026-02-14 04:28:40.837122 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-14 04:28:40.837132 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2026-02-14 04:28:40.837142 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-14 04:28:40.837151 | orchestrator | node-4/prometheus.yml.d' is not a directory
2026-02-14 04:28:40.837162 | orchestrator | [WARNING]: Skipped
2026-02-14 04:28:40.837173 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-14 04:28:40.837183 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2026-02-14 04:28:40.837192 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-14 04:28:40.837203 | orchestrator | node-5/prometheus.yml.d' is not a directory
2026-02-14 04:28:40.837212 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-14 04:28:40.837221 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-14 04:28:40.837231 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-14 04:28:40.837250 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-14 04:28:40.837260 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-14 04:28:40.837270 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-14 04:28:40.837280 | orchestrator |
2026-02-14 04:28:40.837290 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-02-14 04:28:40.837301 | orchestrator | Saturday 14 February 2026 04:28:24 +0000 (0:00:01.838) 0:00:51.545 *****
2026-02-14 04:28:40.837310 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-14 04:28:40.837321 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-14 04:28:40.837331 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:28:40.837341 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:28:40.837352 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-14 04:28:40.837362 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:28:40.837384 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-14 04:28:57.477393 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:28:57.477493 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-14 04:28:57.477506 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:28:57.477514 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-14 04:28:57.477522 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:28:57.477530 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-14 04:28:57.477538 | orchestrator |
2026-02-14 04:28:57.477546 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-02-14 04:28:57.477554 | orchestrator | Saturday 14 February 2026 04:28:40 +0000 (0:00:16.578) 0:01:08.124 *****
2026-02-14 04:28:57.477561 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-14 04:28:57.477568 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:28:57.477576 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-14 04:28:57.477583 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:28:57.477590 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-14 04:28:57.477598 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-14 04:28:57.477606 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:28:57.477619 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:28:57.477631 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-14 04:28:57.477642 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:28:57.477654 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-14 04:28:57.477666 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:28:57.477678 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-14 04:28:57.477691 | orchestrator |
2026-02-14 04:28:57.477704 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-02-14 04:28:57.477719 | orchestrator | Saturday 14 February 2026 04:28:43 +0000 (0:00:02.851) 0:01:10.976 *****
2026-02-14 04:28:57.477728 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-14 04:28:57.477736 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-14 04:28:57.477743 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:28:57.477772 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:28:57.477780 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-14 04:28:57.477787 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:28:57.477794 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-14 04:28:57.477815 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:28:57.477822 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-14 04:28:57.477830 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-14 04:28:57.477837 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:28:57.477844 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-14 04:28:57.477851 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:28:57.477858 | orchestrator |
2026-02-14 04:28:57.477866 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-02-14 04:28:57.477873 | orchestrator | Saturday 14 February 2026 04:28:45 +0000 (0:00:01.767) 0:01:12.744 *****
2026-02-14 04:28:57.477880 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-14 04:28:57.477887 | orchestrator |
2026-02-14 04:28:57.477895 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-02-14 04:28:57.477903 | orchestrator | Saturday 14 February 2026 04:28:46 +0000 (0:00:00.690) 0:01:13.435 *****
2026-02-14 04:28:57.477910 | orchestrator | skipping: [testbed-manager]
2026-02-14 04:28:57.477919 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:28:57.477927 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:28:57.477935 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:28:57.477943 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:28:57.477951 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:28:57.477959 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:28:57.477971 | orchestrator |
2026-02-14 04:28:57.477984 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-02-14 04:28:57.477996 | orchestrator | Saturday 14 February 2026 04:28:46 +0000 (0:00:00.711) 0:01:14.146 *****
2026-02-14 04:28:57.478009 | orchestrator | skipping: [testbed-manager]
2026-02-14 04:28:57.478079 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:28:57.478107 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:28:57.478115 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:28:57.478123 | orchestrator | changed: [testbed-node-0]
2026-02-14 04:28:57.478132 | orchestrator | changed: [testbed-node-2]
2026-02-14 04:28:57.478140 | orchestrator | changed: [testbed-node-1]
2026-02-14 04:28:57.478149 | orchestrator |
2026-02-14 04:28:57.478157 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-02-14 04:28:57.478181 | orchestrator | Saturday 14 February 2026 04:28:49 +0000 (0:00:02.229) 0:01:16.375 *****
2026-02-14 04:28:57.478190 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-14 04:28:57.478198 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-14 04:28:57.478207 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-14 04:28:57.478215 | orchestrator | skipping: [testbed-manager]
2026-02-14 04:28:57.478223 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:28:57.478231 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:28:57.478240 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-14 04:28:57.478248 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:28:57.478256 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-14 04:28:57.478273 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:28:57.478281 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-14 04:28:57.478288 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:28:57.478295 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-14 04:28:57.478303 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:28:57.478310 | orchestrator |
2026-02-14 04:28:57.478320 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-02-14 04:28:57.478332 | orchestrator | Saturday 14 February 2026 04:28:50 +0000 (0:00:01.401) 0:01:17.777 *****
2026-02-14 04:28:57.478345 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-14 04:28:57.478357 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:28:57.478369 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-14 04:28:57.478381 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:28:57.478393 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-14 04:28:57.478407 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:28:57.478416 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-14 04:28:57.478423 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:28:57.478431 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-14 04:28:57.478438 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:28:57.478445 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-14 04:28:57.478453 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-14 04:28:57.478460 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:28:57.478467 | orchestrator |
2026-02-14 04:28:57.478474 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2026-02-14 04:28:57.478487 | orchestrator | Saturday 14 February 2026 04:28:51 +0000 (0:00:01.384) 0:01:19.162 *****
2026-02-14 04:28:57.478495 | orchestrator | [WARNING]: Skipped
2026-02-14 04:28:57.478504 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2026-02-14 04:28:57.478511 | orchestrator | due to this access issue:
2026-02-14 04:28:57.478519 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2026-02-14 04:28:57.478526 | orchestrator | not a directory
2026-02-14 04:28:57.478533 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-14 04:28:57.478541 | orchestrator |
2026-02-14 04:28:57.478548 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2026-02-14 04:28:57.478555 | orchestrator | Saturday 14 February 2026 04:28:52 +0000 (0:00:01.099) 0:01:20.262 *****
2026-02-14 04:28:57.478562 | orchestrator | skipping: [testbed-manager]
2026-02-14 04:28:57.478570 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:28:57.478577 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:28:57.478585 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:28:57.478592 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:28:57.478599 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:28:57.478606 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:28:57.478614 | orchestrator |
2026-02-14 04:28:57.478621 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2026-02-14 04:28:57.478628 | orchestrator | Saturday 14 February 2026 04:28:53 +0000 (0:00:00.973) 0:01:21.235 *****
2026-02-14 04:28:57.478636 | orchestrator | skipping: [testbed-manager]
2026-02-14 04:28:57.478643 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:28:57.478650 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:28:57.478664 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:28:57.478671 | orchestrator | skipping: [testbed-node-3]
2026-02-14
04:28:57.478679 | orchestrator | skipping: [testbed-node-4] 2026-02-14 04:28:57.478691 | orchestrator | skipping: [testbed-node-5] 2026-02-14 04:28:57.478703 | orchestrator | 2026-02-14 04:28:57.478715 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-02-14 04:28:57.478728 | orchestrator | Saturday 14 February 2026 04:28:54 +0000 (0:00:00.928) 0:01:22.163 ***** 2026-02-14 04:28:57.478752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-14 04:28:59.141443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-14 04:28:59.141548 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-14 04:28:59.141564 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-14 04:28:59.141594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-14 04:28:59.141606 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-14 04:28:59.141645 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-14 04:28:59.141658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:28:59.141688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:28:59.141701 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-14 04:28:59.141713 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-14 04:28:59.141726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:28:59.141743 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-14 04:28:59.141755 | 
orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-14 04:28:59.141775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:28:59.141787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:28:59.141806 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-14 04:29:01.053661 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-14 04:29:01.053734 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-14 04:29:01.053755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 
'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:29:01.053773 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-14 04:29:01.053778 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-14 04:29:01.053782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-14 04:29:01.053797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-14 04:29:01.053801 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:29:01.053805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-14 04:29:01.053811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 
'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:29:01.053819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:29:01.053823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 04:29:01.053828 | orchestrator | 2026-02-14 04:29:01.053833 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-02-14 04:29:01.053838 | orchestrator | Saturday 14 February 2026 04:28:59 +0000 (0:00:04.272) 0:01:26.436 ***** 2026-02-14 04:29:01.053842 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-14 04:29:01.053847 | orchestrator | skipping: 
[testbed-manager]
2026-02-14 04:29:01.053851 | orchestrator |
2026-02-14 04:29:01.053855 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-14 04:29:01.053859 | orchestrator | Saturday 14 February 2026 04:29:00 +0000 (0:00:01.210) 0:01:27.646 *****
2026-02-14 04:29:01.053863 | orchestrator |
2026-02-14 04:29:01.053866 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-14 04:29:01.053870 | orchestrator | Saturday 14 February 2026 04:29:00 +0000 (0:00:00.244) 0:01:27.890 *****
2026-02-14 04:29:01.053874 | orchestrator |
2026-02-14 04:29:01.053878 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-14 04:29:01.053882 | orchestrator | Saturday 14 February 2026 04:29:00 +0000 (0:00:00.072) 0:01:27.962 *****
2026-02-14 04:29:01.053885 | orchestrator |
2026-02-14 04:29:01.053889 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-14 04:29:01.053893 | orchestrator | Saturday 14 February 2026 04:29:00 +0000 (0:00:00.069) 0:01:28.032 *****
2026-02-14 04:29:01.053897 | orchestrator |
2026-02-14 04:29:01.053901 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-14 04:29:01.053904 | orchestrator | Saturday 14 February 2026 04:29:00 +0000 (0:00:00.066) 0:01:28.098 *****
2026-02-14 04:29:01.053908 | orchestrator |
2026-02-14 04:29:01.053912 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-14 04:29:01.053916 | orchestrator | Saturday 14 February 2026 04:29:00 +0000 (0:00:00.070) 0:01:28.169 *****
2026-02-14 04:29:01.053920 | orchestrator |
2026-02-14 04:29:01.053923 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-14 04:29:01.053931 | orchestrator | Saturday 14 February 2026 04:29:00 +0000 (0:00:00.069) 0:01:28.238 *****
2026-02-14 04:30:41.737936 | orchestrator |
2026-02-14 04:30:41.738297 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-02-14 04:30:41.738336 | orchestrator | Saturday 14 February 2026 04:29:01 +0000 (0:00:00.094) 0:01:28.332 *****
2026-02-14 04:30:41.738358 | orchestrator | changed: [testbed-manager]
2026-02-14 04:30:41.738379 | orchestrator |
2026-02-14 04:30:41.738398 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-02-14 04:30:41.738417 | orchestrator | Saturday 14 February 2026 04:29:22 +0000 (0:00:21.788) 0:01:50.121 *****
2026-02-14 04:30:41.738436 | orchestrator | changed: [testbed-node-4]
2026-02-14 04:30:41.738457 | orchestrator | changed: [testbed-manager]
2026-02-14 04:30:41.738476 | orchestrator | changed: [testbed-node-3]
2026-02-14 04:30:41.738497 | orchestrator | changed: [testbed-node-2]
2026-02-14 04:30:41.738551 | orchestrator | changed: [testbed-node-5]
2026-02-14 04:30:41.738570 | orchestrator | changed: [testbed-node-0]
2026-02-14 04:30:41.738589 | orchestrator | changed: [testbed-node-1]
2026-02-14 04:30:41.738608 | orchestrator |
2026-02-14 04:30:41.738631 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-02-14 04:30:41.738650 | orchestrator | Saturday 14 February 2026 04:29:31 +0000 (0:00:08.630) 0:01:58.751 *****
2026-02-14 04:30:41.738670 | orchestrator | changed: [testbed-node-0]
2026-02-14 04:30:41.738689 | orchestrator | changed: [testbed-node-1]
2026-02-14 04:30:41.738709 | orchestrator | changed: [testbed-node-2]
2026-02-14 04:30:41.738729 | orchestrator |
2026-02-14 04:30:41.738750 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-02-14 04:30:41.738772 | orchestrator | Saturday 14 February 2026 04:29:42 +0000 (0:00:10.581) 0:02:09.333 *****
2026-02-14 04:30:41.738793 | orchestrator | changed: [testbed-node-0]
2026-02-14 04:30:41.738813 | orchestrator | changed: [testbed-node-1]
2026-02-14 04:30:41.738834 | orchestrator | changed: [testbed-node-2]
2026-02-14 04:30:41.738855 | orchestrator |
2026-02-14 04:30:41.738876 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-02-14 04:30:41.738897 | orchestrator | Saturday 14 February 2026 04:29:47 +0000 (0:00:05.707) 0:02:15.040 *****
2026-02-14 04:30:41.738917 | orchestrator | changed: [testbed-node-0]
2026-02-14 04:30:41.738937 | orchestrator | changed: [testbed-manager]
2026-02-14 04:30:41.738957 | orchestrator | changed: [testbed-node-3]
2026-02-14 04:30:41.738977 | orchestrator | changed: [testbed-node-1]
2026-02-14 04:30:41.738997 | orchestrator | changed: [testbed-node-5]
2026-02-14 04:30:41.739018 | orchestrator | changed: [testbed-node-4]
2026-02-14 04:30:41.739101 | orchestrator | changed: [testbed-node-2]
2026-02-14 04:30:41.739125 | orchestrator |
2026-02-14 04:30:41.739147 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-02-14 04:30:41.739168 | orchestrator | Saturday 14 February 2026 04:30:01 +0000 (0:00:13.911) 0:02:28.951 *****
2026-02-14 04:30:41.739189 | orchestrator | changed: [testbed-manager]
2026-02-14 04:30:41.739210 | orchestrator |
2026-02-14 04:30:41.739232 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-02-14 04:30:41.739254 | orchestrator | Saturday 14 February 2026 04:30:09 +0000 (0:00:08.248) 0:02:37.200 *****
2026-02-14 04:30:41.739276 | orchestrator | changed: [testbed-node-2]
2026-02-14 04:30:41.739296 | orchestrator | changed: [testbed-node-0]
2026-02-14 04:30:41.739317 | orchestrator | changed: [testbed-node-1]
2026-02-14 04:30:41.739338 | orchestrator |
2026-02-14 04:30:41.739359 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-02-14 04:30:41.739380
| orchestrator | Saturday 14 February 2026 04:30:20 +0000 (0:00:10.290) 0:02:47.490 *****
2026-02-14 04:30:41.739401 | orchestrator | changed: [testbed-manager]
2026-02-14 04:30:41.739421 | orchestrator |
2026-02-14 04:30:41.739443 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-02-14 04:30:41.739463 | orchestrator | Saturday 14 February 2026 04:30:30 +0000 (0:00:10.366) 0:02:57.857 *****
2026-02-14 04:30:41.739484 | orchestrator | changed: [testbed-node-3]
2026-02-14 04:30:41.739505 | orchestrator | changed: [testbed-node-4]
2026-02-14 04:30:41.739526 | orchestrator | changed: [testbed-node-5]
2026-02-14 04:30:41.739547 | orchestrator |
2026-02-14 04:30:41.739568 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 04:30:41.739590 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-02-14 04:30:41.739612 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-14 04:30:41.739634 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-14 04:30:41.739655 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-14 04:30:41.739690 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-14 04:30:41.739712 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-14 04:30:41.739734 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-14 04:30:41.739755 | orchestrator |
2026-02-14 04:30:41.739776 | orchestrator |
2026-02-14 04:30:41.739797 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 04:30:41.739818 | orchestrator | Saturday 14 February 2026 04:30:40 +0000 (0:00:10.295) 0:03:08.152 *****
2026-02-14 04:30:41.739840 | orchestrator | ===============================================================================
2026-02-14 04:30:41.739861 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 24.28s
2026-02-14 04:30:41.739919 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 21.79s
2026-02-14 04:30:41.739942 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 16.58s
2026-02-14 04:30:41.739962 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.91s
2026-02-14 04:30:41.739983 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.58s
2026-02-14 04:30:41.740004 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 10.37s
2026-02-14 04:30:41.740025 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.30s
2026-02-14 04:30:41.740083 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.29s
2026-02-14 04:30:41.740101 | orchestrator | prometheus : Restart prometheus-node-exporter container ----------------- 8.63s
2026-02-14 04:30:41.740121 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.25s
2026-02-14 04:30:41.740140 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.38s
2026-02-14 04:30:41.740157 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 5.71s
2026-02-14 04:30:41.740173 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.27s
2026-02-14 04:30:41.740189 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.27s
2026-02-14 04:30:41.740205 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.94s
2026-02-14 04:30:41.740220 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.85s
2026-02-14 04:30:41.740235 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.45s
2026-02-14 04:30:41.740251 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.23s
2026-02-14 04:30:41.740268 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.84s
2026-02-14 04:30:41.740286 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 1.77s
2026-02-14 04:30:44.355137 | orchestrator | 2026-02-14 04:30:44 | INFO  | Task afc74e91-ccb1-4efb-a455-a9eb65bf2456 (grafana) was prepared for execution.
2026-02-14 04:30:44.355269 | orchestrator | 2026-02-14 04:30:44 | INFO  | It takes a moment until task afc74e91-ccb1-4efb-a455-a9eb65bf2456 (grafana) has been started and output is visible here.
2026-02-14 04:30:54.162749 | orchestrator |
2026-02-14 04:30:54.162904 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-14 04:30:54.162923 | orchestrator |
2026-02-14 04:30:54.162935 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-14 04:30:54.162947 | orchestrator | Saturday 14 February 2026 04:30:48 +0000 (0:00:00.259) 0:00:00.259 *****
2026-02-14 04:30:54.162959 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:30:54.163002 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:30:54.163014 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:30:54.163054 | orchestrator |
2026-02-14 04:30:54.163066 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-14 04:30:54.163078 | orchestrator | Saturday 14 February 2026 04:30:48 +0000 (0:00:00.321) 0:00:00.580 *****
2026-02-14 04:30:54.163089 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-02-14 04:30:54.163101 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-02-14 04:30:54.163112 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-02-14 04:30:54.163123 | orchestrator |
2026-02-14 04:30:54.163134 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-02-14 04:30:54.163144 | orchestrator |
2026-02-14 04:30:54.163155 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-02-14 04:30:54.163166 | orchestrator | Saturday 14 February 2026 04:30:49 +0000 (0:00:00.433) 0:00:01.014 *****
2026-02-14 04:30:54.163178 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 04:30:54.163190 | orchestrator |
2026-02-14 04:30:54.163201 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2026-02-14 04:30:54.163212 | orchestrator | Saturday 14 February 2026 04:30:49 +0000 (0:00:00.537) 0:00:01.551 ***** 2026-02-14 04:30:54.163227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-14 04:30:54.163246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-14 04:30:54.163260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-14 04:30:54.163274 | orchestrator | 2026-02-14 04:30:54.163287 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-02-14 04:30:54.163300 | orchestrator | Saturday 14 February 2026 04:30:50 +0000 (0:00:00.947) 0:00:02.498 ***** 2026-02-14 04:30:54.163312 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-02-14 04:30:54.163325 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-02-14 04:30:54.163338 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-14 04:30:54.163360 | orchestrator | 2026-02-14 04:30:54.163373 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-02-14 04:30:54.163385 | orchestrator | Saturday 14 February 2026 04:30:51 +0000 (0:00:00.869) 0:00:03.367 ***** 2026-02-14 04:30:54.163414 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 04:30:54.163427 | orchestrator | 2026-02-14 04:30:54.163439 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-02-14 04:30:54.163452 | orchestrator | Saturday 14 February 2026 04:30:52 +0000 (0:00:00.575) 0:00:03.943 ***** 2026-02-14 04:30:54.163484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-14 04:30:54.163500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-14 04:30:54.163521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-14 04:30:54.163540 | orchestrator | 2026-02-14 04:30:54.163558 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-02-14 04:30:54.163576 | orchestrator | Saturday 14 February 2026 04:30:53 
+0000 (0:00:01.321) 0:00:05.264 ***** 2026-02-14 04:30:54.163594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-14 04:30:54.163613 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:30:54.163633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-14 04:30:54.163664 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:30:54.163708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-14 04:31:01.117535 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:31:01.117653 | orchestrator | 2026-02-14 04:31:01.117670 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-02-14 04:31:01.117684 | orchestrator | Saturday 14 February 2026 04:30:54 +0000 (0:00:00.568) 0:00:05.833 ***** 2026-02-14 04:31:01.117699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-14 04:31:01.117715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': 
'3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-14 04:31:01.117727 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:31:01.117738 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:31:01.117750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-14 04:31:01.117761 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:31:01.117772 | orchestrator | 2026-02-14 04:31:01.117783 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-02-14 04:31:01.117794 | orchestrator | Saturday 14 February 2026 04:30:54 +0000 (0:00:00.606) 0:00:06.439 ***** 2026-02-14 04:31:01.117828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-14 04:31:01.117855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-14 04:31:01.117887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-14 04:31:01.117900 | orchestrator | 2026-02-14 04:31:01.117911 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-02-14 04:31:01.117922 | orchestrator | Saturday 14 February 2026 04:30:55 +0000 (0:00:01.237) 0:00:07.677 ***** 2026-02-14 04:31:01.117934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-14 04:31:01.117946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-14 04:31:01.117957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 
2026-02-14 04:31:01.117976 | orchestrator |
2026-02-14 04:31:01.117988 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-02-14 04:31:01.117999 | orchestrator | Saturday 14 February 2026 04:30:57 +0000 (0:00:01.601) 0:00:09.279 *****
2026-02-14 04:31:01.118010 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:31:01.118100 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:31:01.118114 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:31:01.118127 | orchestrator |
2026-02-14 04:31:01.118140 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-02-14 04:31:01.118152 | orchestrator | Saturday 14 February 2026 04:30:57 +0000 (0:00:00.329) 0:00:09.608 *****
2026-02-14 04:31:01.118166 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-02-14 04:31:01.118179 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-02-14 04:31:01.118191 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-02-14 04:31:01.118217 | orchestrator |
2026-02-14 04:31:01.118229 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-02-14 04:31:01.118248 | orchestrator | Saturday 14 February 2026 04:30:59 +0000 (0:00:01.926) 0:00:10.853 *****
2026-02-14 04:31:01.118262 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-02-14 04:31:01.118275 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-02-14 04:31:01.118288 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-02-14 04:31:01.118301 | orchestrator |
2026-02-14 04:31:01.118314 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-02-14 04:31:01.118335 | orchestrator | Saturday 14 February 2026 04:31:01 +0000 (0:00:00.772) 0:00:12.780 *****
2026-02-14 04:31:07.834632 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-14 04:31:07.834742 | orchestrator |
2026-02-14 04:31:07.834759 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-02-14 04:31:07.834772 | orchestrator | Saturday 14 February 2026 04:31:01 +0000 (0:00:00.772) 0:00:13.552 *****
2026-02-14 04:31:07.834783 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-02-14 04:31:07.834795 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2026-02-14 04:31:07.834806 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:31:07.834818 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:31:07.834828 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:31:07.834839 | orchestrator |
2026-02-14 04:31:07.834850 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-02-14 04:31:07.834862 | orchestrator | Saturday 14 February 2026 04:31:02 +0000 (0:00:00.759) 0:00:14.312 *****
2026-02-14 04:31:07.834873 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:31:07.834883 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:31:07.834894 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:31:07.834905 | orchestrator |
2026-02-14 04:31:07.834916 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-02-14 04:31:07.834927 | orchestrator | Saturday 14 February 2026 04:31:02 +0000 (0:00:00.364) 0:00:14.676 *****
2026-02-14 04:31:07.834942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path':
'/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1327118, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3633828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:07.834980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1327118, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3633828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:07.834992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1327118, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3633828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:07.835004 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1327169, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.376419, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:07.835101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1327169, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.376419, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:07.835117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1327169, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.376419, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:07.835128 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1327133, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3653827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:07.835148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1327133, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3653827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:07.835159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1327133, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3653827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 
04:31:07.835173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1327171, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3778121, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:07.835191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1327171, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3778121, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:07.835213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1327171, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3778121, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}}) 2026-02-14 04:31:11.438360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1327148, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3709493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:11.438548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1327148, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3709493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:11.438566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1327148, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3709493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:11.438595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1327161, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3744807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:11.439474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1327161, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3744807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:11.439496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1327161, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3744807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:11.439535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1327115, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3596997, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:11.439559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1327115, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3596997, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:11.439570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1327115, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3596997, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:11.439582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1327128, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3633828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:11.439593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1327128, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3633828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:11.439610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1327128, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3633828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:11.439631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1327135, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3660867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:15.352629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1327135, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3660867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:15.352754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1327135, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3660867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:15.352767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1327154, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.372446, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:15.352776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1327154, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.372446, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:15.352801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1327154, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.372446, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:15.352810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1327166, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3756526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:15.352860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1327166, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3756526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:15.352869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1327166, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3756526, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:15.352877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1327131, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3653827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:15.352884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1327131, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3653827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:15.352892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1327131, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1771036265.3653827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:15.352905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1327160, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3736422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:15.352926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1327160, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3736422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:19.749605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1327160, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 
'mtime': 1764530892.0, 'ctime': 1771036265.3736422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:19.749737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1327151, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.372446, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:19.749750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1327151, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.372446, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:19.749759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1327151, 'dev': 108, 'nlink': 1, 
'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.372446, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:19.749807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1327144, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3702915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:19.749840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1327144, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3702915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:19.749866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1327144, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3702915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:19.749875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1327141, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3687937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:19.749884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1327141, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3687937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:19.749892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1327141, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3687937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:19.749904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1327157, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3736422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:19.749920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1327157, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3736422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:19.749934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1327157, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3736422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:23.647415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1327137, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3673234, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:23.647534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1327137, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3673234, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:23.647551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1327137, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3673234, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:23.647564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1327164, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3751194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:23.647593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1327164, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3751194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:23.647626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1327164, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3751194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:23.647657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1327296, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.417567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:23.647669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1327296, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.417567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:23.647682 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1327296, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.417567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:23.647693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1327199, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3934472, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:23.647710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1327199, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3934472, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:23.647729 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1327199, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3934472, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:23.647749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1327185, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3818083, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:27.399551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1327185, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3818083, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2026-02-14 04:31:27.399670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1327185, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3818083, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:27.399695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1327226, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3956716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:27.399735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1327226, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3956716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:27.399785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1327226, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3956716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:27.399808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1327177, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.380169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:27.399841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1327177, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.380169, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:27.399853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1327177, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.380169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:27.399865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1327261, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4080372, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:27.399890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1327261, 'dev': 108, 
'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4080372, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:27.399901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1327261, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4080372, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:27.399913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1327229, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.404652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:27.399933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1327229, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.404652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:31.855693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1327229, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.404652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:31.855796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1327266, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4092658, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:31.855849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1327266, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4092658, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:31.855862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1327266, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4092658, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:31.855873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1327291, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4145136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:31.855883 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1327291, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4145136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:31.855911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1327291, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4145136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:31.855922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1327258, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.406969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2026-02-14 04:31:31.855940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1327258, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.406969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:31.855955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1327258, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.406969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:31.855965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1327220, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3943334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:31.855975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1327220, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3943334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:31.855993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1327220, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3943334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:35.632243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1327195, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3875527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:35.632349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1327195, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3875527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:35.632372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1327195, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3875527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:35.632380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1327217, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3934472, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:35.632387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1327217, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3934472, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:35.632394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1327217, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3934472, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:35.632415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1327190, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1771036265.3854878, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:35.632429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1327190, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3854878, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:35.632439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1327190, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.3854878, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:35.632446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
16098, 'inode': 1327224, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.394934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:35.632455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1327224, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.394934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:35.632462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1327224, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.394934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:35.632475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1327281, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4145136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:39.394437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1327281, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4145136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:39.394557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1327281, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4145136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:39.394566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1327273, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.411816, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:39.394575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1327273, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.411816, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:39.394582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1327273, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.411816, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2026-02-14 04:31:39.394588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1327181, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.380935, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:39.394627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1327181, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.380935, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:39.394632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1327181, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.380935, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:39.394640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1327183, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.38174, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:39.394644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1327183, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.38174, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:39.394648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1327183, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.38174, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:39.394652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1327252, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4059305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:31:39.394666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1327252, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4059305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:33:22.015415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1327252, 'dev': 108, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4059305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:33:22.015558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1327270, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4092658, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:33:22.015577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1327270, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4092658, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:33:22.015590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 21898, 'inode': 1327270, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771036265.4092658, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-14 04:33:22.015603 | orchestrator | 2026-02-14 04:33:22.015616 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-02-14 04:33:22.015629 | orchestrator | Saturday 14 February 2026 04:31:40 +0000 (0:00:37.604) 0:00:52.281 ***** 2026-02-14 04:33:22.015641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-14 04:33:22.015698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-14 04:33:22.015712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-14 04:33:22.015723 | orchestrator | 2026-02-14 04:33:22.015740 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-02-14 04:33:22.015747 | orchestrator | Saturday 14 February 2026 04:31:41 +0000 (0:00:01.019) 0:00:53.300 ***** 2026-02-14 04:33:22.015754 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:33:22.015761 | orchestrator | 2026-02-14 04:33:22.015768 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-02-14 04:33:22.015774 | orchestrator | Saturday 14 February 2026 04:31:43 +0000 (0:00:02.337) 0:00:55.638 ***** 2026-02-14 04:33:22.015780 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:33:22.015786 | orchestrator | 2026-02-14 04:33:22.015792 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-14 04:33:22.015799 | orchestrator | Saturday 14 February 2026 04:31:46 +0000 (0:00:02.324) 0:00:57.962 ***** 2026-02-14 04:33:22.015805 | orchestrator | 2026-02-14 04:33:22.015811 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-14 04:33:22.015817 | orchestrator | 
Saturday 14 February 2026 04:31:46 +0000 (0:00:00.071) 0:00:58.033 ***** 2026-02-14 04:33:22.015823 | orchestrator | 2026-02-14 04:33:22.015829 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-14 04:33:22.015835 | orchestrator | Saturday 14 February 2026 04:31:46 +0000 (0:00:00.070) 0:00:58.104 ***** 2026-02-14 04:33:22.015842 | orchestrator | 2026-02-14 04:33:22.015849 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-02-14 04:33:22.015855 | orchestrator | Saturday 14 February 2026 04:31:46 +0000 (0:00:00.070) 0:00:58.174 ***** 2026-02-14 04:33:22.015861 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:33:22.015867 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:33:22.015874 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:33:22.015880 | orchestrator | 2026-02-14 04:33:22.015886 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-02-14 04:33:22.015892 | orchestrator | Saturday 14 February 2026 04:31:53 +0000 (0:00:07.107) 0:01:05.282 ***** 2026-02-14 04:33:22.015904 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:33:22.015912 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:33:22.015919 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-02-14 04:33:22.015951 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-02-14 04:33:22.015959 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2026-02-14 04:33:22.015966 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left). 
2026-02-14 04:33:22.015973 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:33:22.015981 | orchestrator | 2026-02-14 04:33:22.015988 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-02-14 04:33:22.015995 | orchestrator | Saturday 14 February 2026 04:32:44 +0000 (0:00:50.839) 0:01:56.121 ***** 2026-02-14 04:33:22.016002 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:33:22.016010 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:33:22.016017 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:33:22.016024 | orchestrator | 2026-02-14 04:33:22.016031 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-02-14 04:33:22.016038 | orchestrator | Saturday 14 February 2026 04:33:16 +0000 (0:00:32.262) 0:02:28.384 ***** 2026-02-14 04:33:22.016045 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:33:22.016052 | orchestrator | 2026-02-14 04:33:22.016059 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-02-14 04:33:22.016066 | orchestrator | Saturday 14 February 2026 04:33:19 +0000 (0:00:02.345) 0:02:30.729 ***** 2026-02-14 04:33:22.016074 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:33:22.016080 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:33:22.016086 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:33:22.016092 | orchestrator | 2026-02-14 04:33:22.016098 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-02-14 04:33:22.016105 | orchestrator | Saturday 14 February 2026 04:33:19 +0000 (0:00:00.330) 0:02:31.060 ***** 2026-02-14 04:33:22.016112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2026-02-14 04:33:22.016127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-02-14 04:33:22.658796 | orchestrator | 2026-02-14 04:33:22.658903 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-02-14 04:33:22.658977 | orchestrator | Saturday 14 February 2026 04:33:21 +0000 (0:00:02.615) 0:02:33.676 ***** 2026-02-14 04:33:22.658992 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:33:22.659006 | orchestrator | 2026-02-14 04:33:22.659018 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 04:33:22.659032 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-14 04:33:22.659045 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-14 04:33:22.659077 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-14 04:33:22.659089 | orchestrator | 2026-02-14 04:33:22.659101 | orchestrator | 2026-02-14 04:33:22.659113 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 04:33:22.659147 | orchestrator | Saturday 14 February 2026 04:33:22 +0000 (0:00:00.306) 0:02:33.983 ***** 2026-02-14 04:33:22.659159 | orchestrator | =============================================================================== 2026-02-14 04:33:22.659171 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 50.84s 2026-02-14 04:33:22.659183 | orchestrator | grafana : Copying over custom 
dashboards ------------------------------- 37.60s 2026-02-14 04:33:22.659196 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 32.26s 2026-02-14 04:33:22.659207 | orchestrator | grafana : Restart first grafana container ------------------------------- 7.11s 2026-02-14 04:33:22.659218 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.62s 2026-02-14 04:33:22.659228 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.35s 2026-02-14 04:33:22.659238 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.34s 2026-02-14 04:33:22.659249 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.32s 2026-02-14 04:33:22.659259 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.93s 2026-02-14 04:33:22.659270 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.60s 2026-02-14 04:33:22.659282 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.32s 2026-02-14 04:33:22.659293 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.24s 2026-02-14 04:33:22.659305 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.24s 2026-02-14 04:33:22.659317 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.02s 2026-02-14 04:33:22.659329 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.95s 2026-02-14 04:33:22.659340 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.87s 2026-02-14 04:33:22.659352 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.77s 2026-02-14 04:33:22.659363 | orchestrator | grafana : Find templated grafana dashboards 
----------------------------- 0.76s 2026-02-14 04:33:22.659375 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.61s 2026-02-14 04:33:22.659386 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.58s 2026-02-14 04:33:22.969670 | orchestrator | + sh -c /opt/configuration/scripts/deploy/510-clusterapi.sh 2026-02-14 04:33:22.978276 | orchestrator | + set -e 2026-02-14 04:33:22.978366 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-14 04:33:22.978382 | orchestrator | ++ export INTERACTIVE=false 2026-02-14 04:33:22.978395 | orchestrator | ++ INTERACTIVE=false 2026-02-14 04:33:22.978407 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-14 04:33:22.980568 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-14 04:33:22.980618 | orchestrator | + source /opt/manager-vars.sh 2026-02-14 04:33:22.980630 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-14 04:33:22.980641 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-14 04:33:22.980652 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-14 04:33:22.980663 | orchestrator | ++ CEPH_VERSION=reef 2026-02-14 04:33:22.980675 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-14 04:33:22.980688 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-14 04:33:22.980700 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-14 04:33:22.980711 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-14 04:33:22.980722 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-14 04:33:22.980733 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-14 04:33:22.980744 | orchestrator | ++ export ARA=false 2026-02-14 04:33:22.980755 | orchestrator | ++ ARA=false 2026-02-14 04:33:22.980767 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-14 04:33:22.980778 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-14 04:33:22.980788 | orchestrator | ++ export TEMPEST=false 2026-02-14 04:33:22.980799 | orchestrator | ++ TEMPEST=false 
2026-02-14 04:33:22.980810 | orchestrator | ++ export IS_ZUUL=true 2026-02-14 04:33:22.980821 | orchestrator | ++ IS_ZUUL=true 2026-02-14 04:33:22.980832 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.122 2026-02-14 04:33:22.980843 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.122 2026-02-14 04:33:22.980854 | orchestrator | ++ export EXTERNAL_API=false 2026-02-14 04:33:22.980892 | orchestrator | ++ EXTERNAL_API=false 2026-02-14 04:33:22.980903 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-14 04:33:22.980914 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-14 04:33:22.980959 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-14 04:33:22.980978 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-14 04:33:22.980996 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-14 04:33:22.981014 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-14 04:33:22.981032 | orchestrator | ++ semver 9.5.0 8.0.0 2026-02-14 04:33:23.067566 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-14 04:33:23.067659 | orchestrator | + osism apply clusterapi 2026-02-14 04:33:25.123304 | orchestrator | 2026-02-14 04:33:25 | INFO  | Task 3b91ea94-936d-4550-b1e0-95cbd11acff8 (clusterapi) was prepared for execution. 2026-02-14 04:33:25.123405 | orchestrator | 2026-02-14 04:33:25 | INFO  | It takes a moment until task 3b91ea94-936d-4550-b1e0-95cbd11acff8 (clusterapi) has been started and output is visible here. 
2026-02-14 04:34:19.970587 | orchestrator | 2026-02-14 04:34:19.970706 | orchestrator | PLAY [Apply cert_manager role] ************************************************* 2026-02-14 04:34:19.970723 | orchestrator | 2026-02-14 04:34:19.970735 | orchestrator | TASK [Include cert_manager role] *********************************************** 2026-02-14 04:34:19.970746 | orchestrator | Saturday 14 February 2026 04:33:29 +0000 (0:00:00.209) 0:00:00.209 ***** 2026-02-14 04:34:19.970758 | orchestrator | included: cert_manager for testbed-manager 2026-02-14 04:34:19.970770 | orchestrator | 2026-02-14 04:34:19.970781 | orchestrator | TASK [cert_manager : Deploy cert-manager crds] ********************************* 2026-02-14 04:34:19.970792 | orchestrator | Saturday 14 February 2026 04:33:29 +0000 (0:00:00.235) 0:00:00.445 ***** 2026-02-14 04:34:19.970803 | orchestrator | changed: [testbed-manager] 2026-02-14 04:34:19.970815 | orchestrator | 2026-02-14 04:34:19.970826 | orchestrator | TASK [cert_manager : Deploy cert-manager] ************************************** 2026-02-14 04:34:19.970837 | orchestrator | Saturday 14 February 2026 04:33:35 +0000 (0:00:05.531) 0:00:05.976 ***** 2026-02-14 04:34:19.970863 | orchestrator | changed: [testbed-manager] 2026-02-14 04:34:19.970952 | orchestrator | 2026-02-14 04:34:19.970965 | orchestrator | PLAY [Initialize or upgrade the CAPI management cluster] *********************** 2026-02-14 04:34:19.970976 | orchestrator | 2026-02-14 04:34:19.971018 | orchestrator | TASK [Get capi-system namespace phase] ***************************************** 2026-02-14 04:34:19.971048 | orchestrator | Saturday 14 February 2026 04:33:59 +0000 (0:00:23.897) 0:00:29.874 ***** 2026-02-14 04:34:19.971068 | orchestrator | ok: [testbed-manager] 2026-02-14 04:34:19.971086 | orchestrator | 2026-02-14 04:34:19.971104 | orchestrator | TASK [Set capi-system-phase fact] ********************************************** 2026-02-14 04:34:19.971121 | orchestrator | Saturday 
14 February 2026 04:34:00 +0000 (0:00:01.086) 0:00:30.960 ***** 2026-02-14 04:34:19.971139 | orchestrator | ok: [testbed-manager] 2026-02-14 04:34:19.971159 | orchestrator | 2026-02-14 04:34:19.971179 | orchestrator | TASK [Initialize the CAPI management cluster] ********************************** 2026-02-14 04:34:19.971200 | orchestrator | Saturday 14 February 2026 04:34:00 +0000 (0:00:00.147) 0:00:31.107 ***** 2026-02-14 04:34:19.971222 | orchestrator | ok: [testbed-manager] 2026-02-14 04:34:19.971241 | orchestrator | 2026-02-14 04:34:19.971258 | orchestrator | TASK [Upgrade the CAPI management cluster] ************************************* 2026-02-14 04:34:19.971271 | orchestrator | Saturday 14 February 2026 04:34:17 +0000 (0:00:16.735) 0:00:47.843 ***** 2026-02-14 04:34:19.971283 | orchestrator | skipping: [testbed-manager] 2026-02-14 04:34:19.971296 | orchestrator | 2026-02-14 04:34:19.971309 | orchestrator | TASK [Install openstack-resource-controller] *********************************** 2026-02-14 04:34:19.971321 | orchestrator | Saturday 14 February 2026 04:34:17 +0000 (0:00:00.157) 0:00:48.000 ***** 2026-02-14 04:34:19.971334 | orchestrator | changed: [testbed-manager] 2026-02-14 04:34:19.971347 | orchestrator | 2026-02-14 04:34:19.971359 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 04:34:19.971373 | orchestrator | testbed-manager : ok=7  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-14 04:34:19.971415 | orchestrator | 2026-02-14 04:34:19.971428 | orchestrator | 2026-02-14 04:34:19.971441 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 04:34:19.971454 | orchestrator | Saturday 14 February 2026 04:34:19 +0000 (0:00:02.265) 0:00:50.266 ***** 2026-02-14 04:34:19.971466 | orchestrator | =============================================================================== 2026-02-14 04:34:19.971477 | orchestrator | 
cert_manager : Deploy cert-manager ------------------------------------- 23.90s 2026-02-14 04:34:19.971488 | orchestrator | Initialize the CAPI management cluster --------------------------------- 16.74s 2026-02-14 04:34:19.971498 | orchestrator | cert_manager : Deploy cert-manager crds --------------------------------- 5.53s 2026-02-14 04:34:19.971509 | orchestrator | Install openstack-resource-controller ----------------------------------- 2.27s 2026-02-14 04:34:19.971520 | orchestrator | Get capi-system namespace phase ----------------------------------------- 1.09s 2026-02-14 04:34:19.971531 | orchestrator | Include cert_manager role ----------------------------------------------- 0.24s 2026-02-14 04:34:19.971541 | orchestrator | Upgrade the CAPI management cluster ------------------------------------- 0.16s 2026-02-14 04:34:19.971553 | orchestrator | Set capi-system-phase fact ---------------------------------------------- 0.15s 2026-02-14 04:34:20.307619 | orchestrator | + osism apply magnum 2026-02-14 04:34:22.355312 | orchestrator | 2026-02-14 04:34:22 | INFO  | Task e0565ce3-69fb-4874-bfe7-a3c6f531dfeb (magnum) was prepared for execution. 2026-02-14 04:34:22.355384 | orchestrator | 2026-02-14 04:34:22 | INFO  | It takes a moment until task e0565ce3-69fb-4874-bfe7-a3c6f531dfeb (magnum) has been started and output is visible here. 
2026-02-14 04:35:04.861316 | orchestrator | 2026-02-14 04:35:04.861439 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-14 04:35:04.861458 | orchestrator | 2026-02-14 04:35:04.861471 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-14 04:35:04.861483 | orchestrator | Saturday 14 February 2026 04:34:26 +0000 (0:00:00.284) 0:00:00.284 ***** 2026-02-14 04:35:04.861495 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:35:04.861507 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:35:04.861517 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:35:04.861528 | orchestrator | 2026-02-14 04:35:04.861539 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-14 04:35:04.861550 | orchestrator | Saturday 14 February 2026 04:34:26 +0000 (0:00:00.338) 0:00:00.623 ***** 2026-02-14 04:35:04.861561 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-02-14 04:35:04.861572 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-02-14 04:35:04.861583 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-02-14 04:35:04.861594 | orchestrator | 2026-02-14 04:35:04.861605 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-02-14 04:35:04.861616 | orchestrator | 2026-02-14 04:35:04.861626 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-14 04:35:04.861637 | orchestrator | Saturday 14 February 2026 04:34:27 +0000 (0:00:00.453) 0:00:01.077 ***** 2026-02-14 04:35:04.861648 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 04:35:04.861660 | orchestrator | 2026-02-14 04:35:04.861670 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-02-14 
04:35:04.861681 | orchestrator | Saturday 14 February 2026 04:34:27 +0000 (0:00:00.586) 0:00:01.663 ***** 2026-02-14 04:35:04.861693 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-02-14 04:35:04.861704 | orchestrator | 2026-02-14 04:35:04.861714 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-02-14 04:35:04.861725 | orchestrator | Saturday 14 February 2026 04:34:31 +0000 (0:00:03.684) 0:00:05.348 ***** 2026-02-14 04:35:04.861736 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-02-14 04:35:04.861747 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-02-14 04:35:04.861784 | orchestrator | 2026-02-14 04:35:04.861810 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-02-14 04:35:04.861822 | orchestrator | Saturday 14 February 2026 04:34:38 +0000 (0:00:06.546) 0:00:11.894 ***** 2026-02-14 04:35:04.861862 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-14 04:35:04.861877 | orchestrator | 2026-02-14 04:35:04.861889 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-02-14 04:35:04.861902 | orchestrator | Saturday 14 February 2026 04:34:41 +0000 (0:00:03.317) 0:00:15.212 ***** 2026-02-14 04:35:04.861914 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-14 04:35:04.861927 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-02-14 04:35:04.861940 | orchestrator | 2026-02-14 04:35:04.861952 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-02-14 04:35:04.861962 | orchestrator | Saturday 14 February 2026 04:34:45 +0000 (0:00:03.874) 0:00:19.086 ***** 2026-02-14 04:35:04.861973 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-02-14 04:35:04.861984 | orchestrator | 2026-02-14 04:35:04.861995 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-02-14 04:35:04.862006 | orchestrator | Saturday 14 February 2026 04:34:48 +0000 (0:00:03.276) 0:00:22.363 ***** 2026-02-14 04:35:04.862072 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-02-14 04:35:04.862085 | orchestrator | 2026-02-14 04:35:04.862097 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-02-14 04:35:04.862107 | orchestrator | Saturday 14 February 2026 04:34:52 +0000 (0:00:03.777) 0:00:26.140 ***** 2026-02-14 04:35:04.862119 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:35:04.862130 | orchestrator | 2026-02-14 04:35:04.862141 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-02-14 04:35:04.862152 | orchestrator | Saturday 14 February 2026 04:34:55 +0000 (0:00:03.338) 0:00:29.479 ***** 2026-02-14 04:35:04.862163 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:35:04.862174 | orchestrator | 2026-02-14 04:35:04.862193 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-02-14 04:35:04.862204 | orchestrator | Saturday 14 February 2026 04:34:59 +0000 (0:00:04.048) 0:00:33.528 ***** 2026-02-14 04:35:04.862216 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:35:04.862227 | orchestrator | 2026-02-14 04:35:04.862238 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-02-14 04:35:04.862249 | orchestrator | Saturday 14 February 2026 04:35:03 +0000 (0:00:03.446) 0:00:36.975 ***** 2026-02-14 04:35:04.862284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-14 04:35:04.862301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-14 04:35:04.862329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-14 04:35:04.862342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-14 04:35:04.862354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-14 04:35:04.862374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-14 04:35:11.980138 | orchestrator | 2026-02-14 04:35:11.980249 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-02-14 04:35:11.980266 | orchestrator | Saturday 14 February 2026 04:35:04 +0000 (0:00:01.603) 0:00:38.578 ***** 2026-02-14 04:35:11.980278 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:35:11.980316 | orchestrator | 2026-02-14 04:35:11.980328 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-02-14 04:35:11.980339 | orchestrator | Saturday 14 February 2026 04:35:05 +0000 (0:00:00.176) 0:00:38.754 ***** 2026-02-14 04:35:11.980350 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:35:11.980361 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:35:11.980372 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:35:11.980382 | orchestrator | 2026-02-14 04:35:11.980393 | orchestrator | 
TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-02-14 04:35:11.980404 | orchestrator | Saturday 14 February 2026 04:35:05 +0000 (0:00:00.321) 0:00:39.076 ***** 2026-02-14 04:35:11.980415 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-14 04:35:11.980426 | orchestrator | 2026-02-14 04:35:11.980437 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-02-14 04:35:11.980448 | orchestrator | Saturday 14 February 2026 04:35:06 +0000 (0:00:00.821) 0:00:39.898 ***** 2026-02-14 04:35:11.980477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-14 04:35:11.980493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-14 04:35:11.980506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-14 04:35:11.980537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-14 04:35:11.980558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-14 04:35:11.980575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-14 04:35:11.980586 | orchestrator | 2026-02-14 04:35:11.980598 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-02-14 04:35:11.980609 
| orchestrator | Saturday 14 February 2026 04:35:08 +0000 (0:00:02.304) 0:00:42.203 ***** 2026-02-14 04:35:11.980620 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:35:11.980632 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:35:11.980643 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:35:11.980653 | orchestrator | 2026-02-14 04:35:11.980664 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-14 04:35:11.980677 | orchestrator | Saturday 14 February 2026 04:35:08 +0000 (0:00:00.400) 0:00:42.603 ***** 2026-02-14 04:35:11.980690 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 04:35:11.980702 | orchestrator | 2026-02-14 04:35:11.980715 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-02-14 04:35:11.980726 | orchestrator | Saturday 14 February 2026 04:35:09 +0000 (0:00:00.512) 0:00:43.116 ***** 2026-02-14 04:35:11.980739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-14 04:35:11.980769 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-14 04:35:12.852216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-14 04:35:12.852339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-14 04:35:12.852357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-14 04:35:12.852370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-14 04:35:12.852404 | orchestrator | 2026-02-14 04:35:12.852419 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-02-14 04:35:12.852431 | orchestrator | Saturday 14 February 2026 04:35:11 +0000 (0:00:02.588) 0:00:45.705 ***** 2026-02-14 04:35:12.852462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-14 04:35:12.852475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-14 04:35:12.852486 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:35:12.852505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-14 04:35:12.852517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-14 04:35:12.852528 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:35:12.852540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-14 04:35:12.852568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-14 04:35:16.462291 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:35:16.462404 | orchestrator | 2026-02-14 
04:35:16.462420 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-02-14 04:35:16.462433 | orchestrator | Saturday 14 February 2026 04:35:12 +0000 (0:00:00.872) 0:00:46.577 ***** 2026-02-14 04:35:16.462464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-14 04:35:16.462481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-14 04:35:16.462494 | 
orchestrator | skipping: [testbed-node-0] 2026-02-14 04:35:16.462505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-14 04:35:16.462540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-14 04:35:16.462551 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:35:16.462581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-14 04:35:16.462599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-14 04:35:16.462611 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:35:16.462622 | orchestrator | 2026-02-14 04:35:16.462634 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-02-14 04:35:16.462645 | orchestrator | Saturday 14 February 2026 04:35:13 +0000 (0:00:00.901) 0:00:47.479 ***** 2026-02-14 04:35:16.462657 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-14 04:35:16.462678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-14 04:35:16.462698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-14 04:35:22.512380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-14 04:35:22.512511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-14 04:35:22.512529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-14 04:35:22.512562 | orchestrator | 2026-02-14 04:35:22.512577 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-02-14 04:35:22.512590 | orchestrator | Saturday 14 February 2026 04:35:16 +0000 (0:00:02.713) 0:00:50.192 ***** 2026-02-14 04:35:22.512601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-14 04:35:22.512632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-14 04:35:22.512650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-14 04:35:22.512662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-14 04:35:22.512682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-14 04:35:22.512693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-14 04:35:22.512704 | orchestrator |
2026-02-14 04:35:22.512715 | orchestrator | TASK [magnum : Copying over existing policy file] ******************************
2026-02-14 04:35:22.512727 | orchestrator | Saturday 14 February 2026 04:35:21 +0000 (0:00:05.397) 0:00:55.589 *****
2026-02-14 04:35:22.512747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-14 04:35:24.393649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-14 04:35:24.393773 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:35:24.393792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-14 04:35:24.393886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-14 04:35:24.393900 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:35:24.393912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-14 04:35:24.393945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-14 04:35:24.393958 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:35:24.393969 | orchestrator |
2026-02-14 04:35:24.393981 | orchestrator | TASK [magnum : Check magnum containers] ****************************************
2026-02-14 04:35:24.393993 | orchestrator | Saturday 14 February 2026 04:35:22 +0000 (0:00:00.653) 0:00:56.243 *****
2026-02-14 04:35:24.394012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-14 04:35:24.394093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-14 04:35:24.394105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-14 04:35:24.394120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-14 04:35:24.394142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-14 04:36:18.419236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-14 04:36:18.419341 | orchestrator |
2026-02-14 04:36:18.419349 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-02-14 04:36:18.419355 | orchestrator | Saturday 14 February 2026 04:35:24 +0000 (0:00:01.875) 0:00:58.119 *****
2026-02-14 04:36:18.419360 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:36:18.419366 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:36:18.419370 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:36:18.419375 | orchestrator |
2026-02-14 04:36:18.419379 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2026-02-14 04:36:18.419384 | orchestrator | Saturday 14 February 2026 04:35:24 +0000 (0:00:00.540) 0:00:58.659 *****
2026-02-14 04:36:18.419388 | orchestrator | changed: [testbed-node-0]
2026-02-14 04:36:18.419392 | orchestrator |
2026-02-14 04:36:18.419397 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2026-02-14 04:36:18.419401 | orchestrator | Saturday 14 February 2026 04:35:27 +0000 (0:00:02.127) 0:01:00.787 *****
2026-02-14 04:36:18.419405 | orchestrator | changed: [testbed-node-0]
2026-02-14 04:36:18.419409 | orchestrator |
2026-02-14 04:36:18.419414 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2026-02-14 04:36:18.419418 | orchestrator | Saturday 14 February 2026 04:35:29 +0000 (0:00:02.275) 0:01:03.063 *****
2026-02-14 04:36:18.419422 | orchestrator | changed: [testbed-node-0]
2026-02-14 04:36:18.419426 | orchestrator |
2026-02-14 04:36:18.419431 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-02-14 04:36:18.419435 | orchestrator | Saturday 14 February 2026 04:35:45 +0000 (0:00:16.645) 0:01:19.708 *****
2026-02-14 04:36:18.419439 | orchestrator |
2026-02-14 04:36:18.419443 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-02-14 04:36:18.419448 | orchestrator | Saturday 14 February 2026 04:35:46 +0000 (0:00:00.072) 0:01:19.780 *****
2026-02-14 04:36:18.419452 | orchestrator |
2026-02-14 04:36:18.419456 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-02-14 04:36:18.419461 | orchestrator | Saturday 14 February 2026 04:35:46 +0000 (0:00:00.083) 0:01:19.864 *****
2026-02-14 04:36:18.419465 | orchestrator |
2026-02-14 04:36:18.419469 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2026-02-14 04:36:18.419474 | orchestrator | Saturday 14 February 2026 04:35:46 +0000 (0:00:00.073) 0:01:19.938 *****
2026-02-14 04:36:18.419478 | orchestrator | changed: [testbed-node-0]
2026-02-14 04:36:18.419482 | orchestrator | changed: [testbed-node-1]
2026-02-14 04:36:18.419486 | orchestrator | changed: [testbed-node-2]
2026-02-14 04:36:18.419490 | orchestrator |
2026-02-14 04:36:18.419495 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2026-02-14 04:36:18.419499 | orchestrator | Saturday 14 February 2026 04:36:01 +0000 (0:00:15.748) 0:01:35.687 *****
2026-02-14 04:36:18.419503 | orchestrator | changed: [testbed-node-0]
2026-02-14 04:36:18.419507 | orchestrator | changed: [testbed-node-2]
2026-02-14 04:36:18.419512 | orchestrator | changed: [testbed-node-1]
2026-02-14 04:36:18.419516 | orchestrator |
2026-02-14 04:36:18.419520 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 04:36:18.419525 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-14 04:36:18.419531 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-14 04:36:18.419541 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-14 04:36:18.419545 | orchestrator |
2026-02-14 04:36:18.419549 | orchestrator |
2026-02-14 04:36:18.419554 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 04:36:18.419558 | orchestrator | Saturday 14 February 2026 04:36:18 +0000 (0:00:16.114) 0:01:51.801 *****
2026-02-14 04:36:18.419562 | orchestrator | ===============================================================================
2026-02-14 04:36:18.419567 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.65s
2026-02-14 04:36:18.419571 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 16.11s
2026-02-14 04:36:18.419576 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 15.75s
2026-02-14 04:36:18.419580 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.55s
2026-02-14 04:36:18.419584 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.40s
2026-02-14 04:36:18.419588 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.05s
2026-02-14 04:36:18.419593 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.87s
2026-02-14 04:36:18.419608 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.78s
2026-02-14 04:36:18.419612 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.68s
2026-02-14 04:36:18.419617 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.45s
2026-02-14 04:36:18.419621 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.34s
2026-02-14 04:36:18.419628 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.32s
2026-02-14 04:36:18.419633 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.28s
2026-02-14 04:36:18.419637 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.71s
2026-02-14 04:36:18.419641 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.59s
2026-02-14 04:36:18.419646 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.30s
2026-02-14 04:36:18.419650 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.28s
2026-02-14 04:36:18.419654 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.13s
2026-02-14 04:36:18.419658 | orchestrator | magnum : Check magnum containers ---------------------------------------- 1.88s
2026-02-14 04:36:18.419663 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.60s
2026-02-14 04:36:19.207515 | orchestrator | ok: Runtime: 1:41:38.199402
2026-02-14 04:36:19.488591 |
2026-02-14 04:36:19.488751 | TASK [Deploy in a nutshell]
2026-02-14 04:36:20.023901 | orchestrator | skipping: Conditional result was False
2026-02-14 04:36:20.052531 |
2026-02-14 04:36:20.052726 | TASK [Bootstrap services]
2026-02-14 04:36:20.741886 | orchestrator |
2026-02-14 04:36:20.742104 | orchestrator | # BOOTSTRAP
2026-02-14 04:36:20.742131 | orchestrator |
2026-02-14 04:36:20.742146 | orchestrator | + set -e
2026-02-14 04:36:20.742160 | orchestrator | + echo
2026-02-14 04:36:20.742173 | orchestrator | + echo '# BOOTSTRAP'
2026-02-14 04:36:20.742191 | orchestrator | + echo
2026-02-14 04:36:20.742239 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2026-02-14 04:36:20.750996 | orchestrator | + set -e
2026-02-14 04:36:20.751038 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2026-02-14 04:36:22.947865 | orchestrator | 2026-02-14 04:36:22 | INFO  | It takes a moment until task 58cee984-5195-448b-b6c9-fec483957fe8 (flavor-manager) has been started and output is visible here.
2026-02-14 04:36:31.022742 | orchestrator | 2026-02-14 04:36:26 | INFO  | Flavor SCS-1L-1 created
2026-02-14 04:36:31.022935 | orchestrator | 2026-02-14 04:36:26 | INFO  | Flavor SCS-1L-1-5 created
2026-02-14 04:36:31.023734 | orchestrator | 2026-02-14 04:36:26 | INFO  | Flavor SCS-1V-2 created
2026-02-14 04:36:31.023825 | orchestrator | 2026-02-14 04:36:26 | INFO  | Flavor SCS-1V-2-5 created
2026-02-14 04:36:31.023840 | orchestrator | 2026-02-14 04:36:27 | INFO  | Flavor SCS-1V-4 created
2026-02-14 04:36:31.023850 | orchestrator | 2026-02-14 04:36:27 | INFO  | Flavor SCS-1V-4-10 created
2026-02-14 04:36:31.023861 | orchestrator | 2026-02-14 04:36:27 | INFO  | Flavor SCS-1V-8 created
2026-02-14 04:36:31.023873 | orchestrator | 2026-02-14 04:36:27 | INFO  | Flavor SCS-1V-8-20 created
2026-02-14 04:36:31.023893 | orchestrator | 2026-02-14 04:36:27 | INFO  | Flavor SCS-2V-4 created
2026-02-14 04:36:31.023904 | orchestrator | 2026-02-14 04:36:27 | INFO  | Flavor SCS-2V-4-10 created
2026-02-14 04:36:31.023914 | orchestrator | 2026-02-14 04:36:27 | INFO  | Flavor SCS-2V-8 created
2026-02-14 04:36:31.023924 | orchestrator | 2026-02-14 04:36:28 | INFO  | Flavor SCS-2V-8-20 created
2026-02-14 04:36:31.023934 | orchestrator | 2026-02-14 04:36:28 | INFO  | Flavor SCS-2V-16 created
2026-02-14 04:36:31.023944 | orchestrator | 2026-02-14 04:36:28 | INFO  | Flavor SCS-2V-16-50 created
2026-02-14 04:36:31.023953 | orchestrator | 2026-02-14 04:36:28 | INFO  | Flavor SCS-4V-8 created
2026-02-14 04:36:31.023963 | orchestrator | 2026-02-14 04:36:28 | INFO  | Flavor SCS-4V-8-20 created
2026-02-14 04:36:31.023973 | orchestrator | 2026-02-14 04:36:28 | INFO  | Flavor SCS-4V-16 created
2026-02-14 04:36:31.023983 | orchestrator | 2026-02-14 04:36:29 | INFO  | Flavor SCS-4V-16-50 created
2026-02-14 04:36:31.023993 | orchestrator | 2026-02-14 04:36:29 | INFO  | Flavor SCS-4V-32 created
2026-02-14 04:36:31.024003 | orchestrator | 2026-02-14 04:36:29 | INFO  | Flavor SCS-4V-32-100 created
2026-02-14 04:36:31.024013 | orchestrator | 2026-02-14 04:36:29 | INFO  | Flavor SCS-8V-16 created
2026-02-14 04:36:31.024023 | orchestrator | 2026-02-14 04:36:29 | INFO  | Flavor SCS-8V-16-50 created
2026-02-14 04:36:31.024034 | orchestrator | 2026-02-14 04:36:29 | INFO  | Flavor SCS-8V-32 created
2026-02-14 04:36:31.024044 | orchestrator | 2026-02-14 04:36:30 | INFO  | Flavor SCS-8V-32-100 created
2026-02-14 04:36:31.024053 | orchestrator | 2026-02-14 04:36:30 | INFO  | Flavor SCS-16V-32 created
2026-02-14 04:36:31.024063 | orchestrator | 2026-02-14 04:36:30 | INFO  | Flavor SCS-16V-32-100 created
2026-02-14 04:36:31.024073 | orchestrator | 2026-02-14 04:36:30 | INFO  | Flavor SCS-2V-4-20s created
2026-02-14 04:36:31.024083 | orchestrator | 2026-02-14 04:36:30 | INFO  | Flavor SCS-4V-8-50s created
2026-02-14 04:36:31.024093 | orchestrator | 2026-02-14 04:36:30 | INFO  | Flavor SCS-8V-32-100s created
2026-02-14 04:36:33.310838 | orchestrator | 2026-02-14 04:36:33 | INFO  | Trying to run play bootstrap-basic in environment openstack
2026-02-14 04:36:43.499303 | orchestrator | 2026-02-14 04:36:43 | INFO  | Task 7ca50a6c-f837-4916-8f66-a669dd111b0b (bootstrap-basic) was prepared for execution.
2026-02-14 04:36:43.499447 | orchestrator | 2026-02-14 04:36:43 | INFO  | It takes a moment until task 7ca50a6c-f837-4916-8f66-a669dd111b0b (bootstrap-basic) has been started and output is visible here.
2026-02-14 04:37:26.776472 | orchestrator |
2026-02-14 04:37:26.776555 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2026-02-14 04:37:26.776563 | orchestrator |
2026-02-14 04:37:26.776567 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-14 04:37:26.776572 | orchestrator | Saturday 14 February 2026 04:36:47 +0000 (0:00:00.073) 0:00:00.073 *****
2026-02-14 04:37:26.776577 | orchestrator | ok: [localhost]
2026-02-14 04:37:26.776581 | orchestrator |
2026-02-14 04:37:26.776585 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2026-02-14 04:37:26.776589 | orchestrator | Saturday 14 February 2026 04:36:49 +0000 (0:00:01.852) 0:00:01.926 *****
2026-02-14 04:37:26.776593 | orchestrator | ok: [localhost]
2026-02-14 04:37:26.776597 | orchestrator |
2026-02-14 04:37:26.776601 | orchestrator | TASK [Create volume type LUKS] *************************************************
2026-02-14 04:37:26.776605 | orchestrator | Saturday 14 February 2026 04:36:56 +0000 (0:00:06.435) 0:00:08.362 *****
2026-02-14 04:37:26.776609 | orchestrator | changed: [localhost]
2026-02-14 04:37:26.776613 | orchestrator |
2026-02-14 04:37:26.776617 | orchestrator | TASK [Create public network] ***************************************************
2026-02-14 04:37:26.776621 | orchestrator | Saturday 14 February 2026 04:37:02 +0000 (0:00:06.497) 0:00:14.859 *****
2026-02-14 04:37:26.776624 | orchestrator | changed: [localhost]
2026-02-14 04:37:26.776628 | orchestrator |
2026-02-14 04:37:26.776632 | orchestrator | TASK [Set public network to default] *******************************************
2026-02-14 04:37:26.776636 | orchestrator | Saturday 14 February 2026 04:37:08 +0000 (0:00:05.686) 0:00:20.546 *****
2026-02-14 04:37:26.776643 | orchestrator | changed: [localhost]
2026-02-14 04:37:26.776647 | orchestrator |
2026-02-14 04:37:26.776651 | orchestrator | TASK [Create public subnet] ****************************************************
2026-02-14 04:37:26.776655 | orchestrator | Saturday 14 February 2026 04:37:14 +0000 (0:00:06.476) 0:00:27.023 *****
2026-02-14 04:37:26.776658 | orchestrator | changed: [localhost]
2026-02-14 04:37:26.776662 | orchestrator |
2026-02-14 04:37:26.776666 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2026-02-14 04:37:26.776670 | orchestrator | Saturday 14 February 2026 04:37:18 +0000 (0:00:04.245) 0:00:31.268 *****
2026-02-14 04:37:26.776674 | orchestrator | changed: [localhost]
2026-02-14 04:37:26.776677 | orchestrator |
2026-02-14 04:37:26.776681 | orchestrator | TASK [Create manager role] *****************************************************
2026-02-14 04:37:26.776691 | orchestrator | Saturday 14 February 2026 04:37:22 +0000 (0:00:03.933) 0:00:35.202 *****
2026-02-14 04:37:26.776695 | orchestrator | ok: [localhost]
2026-02-14 04:37:26.776699 | orchestrator |
2026-02-14 04:37:26.776703 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 04:37:26.776707 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-14 04:37:26.776711 | orchestrator |
2026-02-14 04:37:26.776715 | orchestrator |
2026-02-14 04:37:26.776719 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 04:37:26.776748 | orchestrator | Saturday 14 February 2026 04:37:26 +0000 (0:00:03.533) 0:00:38.736 *****
2026-02-14 04:37:26.776753 | orchestrator | ===============================================================================
2026-02-14 04:37:26.776756 | orchestrator | Create volume type LUKS ------------------------------------------------- 6.50s
2026-02-14 04:37:26.776760 | orchestrator | Set public network to default ------------------------------------------- 6.48s
2026-02-14 04:37:26.776764 | orchestrator | Get volume type LUKS ---------------------------------------------------- 6.44s
2026-02-14 04:37:26.776768 | orchestrator | Create public network --------------------------------------------------- 5.69s
2026-02-14 04:37:26.776787 | orchestrator | Create public subnet ---------------------------------------------------- 4.25s
2026-02-14 04:37:26.776791 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.93s
2026-02-14 04:37:26.776795 | orchestrator | Create manager role ----------------------------------------------------- 3.53s
2026-02-14 04:37:26.776799 | orchestrator | Gathering Facts --------------------------------------------------------- 1.85s
2026-02-14 04:37:29.250100 | orchestrator | 2026-02-14 04:37:29 | INFO  | It takes a moment until task a40a6cde-184f-4c04-9c75-cff49ef6074d (image-manager) has been started and output is visible here.
2026-02-14 04:38:13.260502 | orchestrator | 2026-02-14 04:37:31 | INFO  | Processing image 'Cirros 0.6.2'
2026-02-14 04:38:13.260622 | orchestrator | 2026-02-14 04:37:32 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2026-02-14 04:38:13.260641 | orchestrator | 2026-02-14 04:37:32 | INFO  | Importing image Cirros 0.6.2
2026-02-14 04:38:13.260653 | orchestrator | 2026-02-14 04:37:32 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-02-14 04:38:13.260665 | orchestrator | 2026-02-14 04:37:34 | INFO  | Waiting for image to leave queued state...
2026-02-14 04:38:13.260678 | orchestrator | 2026-02-14 04:37:36 | INFO  | Waiting for import to complete...
2026-02-14 04:38:13.260689 | orchestrator | 2026-02-14 04:37:46 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-02-14 04:38:13.260730 | orchestrator | 2026-02-14 04:37:46 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-02-14 04:38:13.260741 | orchestrator | 2026-02-14 04:37:46 | INFO  | Setting internal_version = 0.6.2
2026-02-14 04:38:13.260752 | orchestrator | 2026-02-14 04:37:46 | INFO  | Setting image_original_user = cirros
2026-02-14 04:38:13.260764 | orchestrator | 2026-02-14 04:37:46 | INFO  | Adding tag os:cirros
2026-02-14 04:38:13.260774 | orchestrator | 2026-02-14 04:37:47 | INFO  | Setting property architecture: x86_64
2026-02-14 04:38:13.260785 | orchestrator | 2026-02-14 04:37:47 | INFO  | Setting property hw_disk_bus: scsi
2026-02-14 04:38:13.260796 | orchestrator | 2026-02-14 04:37:47 | INFO  | Setting property hw_rng_model: virtio
2026-02-14 04:38:13.260807 | orchestrator | 2026-02-14 04:37:47 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-02-14 04:38:13.260818 | orchestrator | 2026-02-14 04:37:48 | INFO  | Setting property hw_watchdog_action: reset
2026-02-14 04:38:13.260830 | orchestrator | 2026-02-14 04:37:48 | INFO  | Setting property hypervisor_type: qemu
2026-02-14 04:38:13.260841 | orchestrator | 2026-02-14 04:37:48 | INFO  | Setting property os_distro: cirros
2026-02-14 04:38:13.260852 | orchestrator | 2026-02-14 04:37:49 | INFO  | Setting property os_purpose: minimal
2026-02-14 04:38:13.260862 | orchestrator | 2026-02-14 04:37:49 | INFO  | Setting property replace_frequency: never
2026-02-14 04:38:13.260874 | orchestrator | 2026-02-14 04:37:49 | INFO  | Setting property uuid_validity: none
2026-02-14 04:38:13.260884 | orchestrator | 2026-02-14 04:37:50 | INFO  | Setting property provided_until: none
2026-02-14 04:38:13.260895 | orchestrator | 2026-02-14 04:37:50 | INFO  | Setting property image_description: Cirros
2026-02-14 04:38:13.260906 | orchestrator | 2026-02-14 04:37:50 | INFO  | Setting property image_name: Cirros
2026-02-14 04:38:13.260917 | orchestrator | 2026-02-14 04:37:50 | INFO  | Setting property internal_version: 0.6.2
2026-02-14 04:38:13.260927 | orchestrator | 2026-02-14 04:37:50 | INFO  | Setting property image_original_user: cirros
2026-02-14 04:38:13.260963 | orchestrator | 2026-02-14 04:37:51 | INFO  | Setting property os_version: 0.6.2
2026-02-14 04:38:13.260984 | orchestrator | 2026-02-14 04:37:51 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-02-14 04:38:13.260997 | orchestrator | 2026-02-14 04:37:51 | INFO  | Setting property image_build_date: 2023-05-30
2026-02-14 04:38:13.261008 | orchestrator | 2026-02-14 04:37:52 | INFO  | Checking status of 'Cirros 0.6.2'
2026-02-14 04:38:13.261019 | orchestrator | 2026-02-14 04:37:52 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-02-14 04:38:13.261030 | orchestrator | 2026-02-14 04:37:52 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-02-14 04:38:13.261041 | orchestrator | 2026-02-14 04:37:52 | INFO  | Processing image 'Cirros 0.6.3'
2026-02-14 04:38:13.261056 | orchestrator | 2026-02-14 04:37:52 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-02-14 04:38:13.261068 | orchestrator | 2026-02-14 04:37:52 | INFO  | Importing image Cirros 0.6.3
2026-02-14 04:38:13.261079 | orchestrator | 2026-02-14 04:37:52 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-02-14 04:38:13.261090 | orchestrator | 2026-02-14 04:37:54 | INFO  | Waiting for image to leave queued state...
2026-02-14 04:38:13.261101 | orchestrator | 2026-02-14 04:37:56 | INFO  | Waiting for import to complete...
2026-02-14 04:38:13.261130 | orchestrator | 2026-02-14 04:38:06 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-02-14 04:38:13.261143 | orchestrator | 2026-02-14 04:38:06 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-02-14 04:38:13.261153 | orchestrator | 2026-02-14 04:38:06 | INFO  | Setting internal_version = 0.6.3 2026-02-14 04:38:13.261164 | orchestrator | 2026-02-14 04:38:06 | INFO  | Setting image_original_user = cirros 2026-02-14 04:38:13.261175 | orchestrator | 2026-02-14 04:38:06 | INFO  | Adding tag os:cirros 2026-02-14 04:38:13.261186 | orchestrator | 2026-02-14 04:38:07 | INFO  | Setting property architecture: x86_64 2026-02-14 04:38:13.261197 | orchestrator | 2026-02-14 04:38:07 | INFO  | Setting property hw_disk_bus: scsi 2026-02-14 04:38:13.261208 | orchestrator | 2026-02-14 04:38:07 | INFO  | Setting property hw_rng_model: virtio 2026-02-14 04:38:13.261218 | orchestrator | 2026-02-14 04:38:07 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-02-14 04:38:13.261229 | orchestrator | 2026-02-14 04:38:08 | INFO  | Setting property hw_watchdog_action: reset 2026-02-14 04:38:13.261240 | orchestrator | 2026-02-14 04:38:08 | INFO  | Setting property hypervisor_type: qemu 2026-02-14 04:38:13.261251 | orchestrator | 2026-02-14 04:38:08 | INFO  | Setting property os_distro: cirros 2026-02-14 04:38:13.261262 | orchestrator | 2026-02-14 04:38:09 | INFO  | Setting property os_purpose: minimal 2026-02-14 04:38:13.261273 | orchestrator | 2026-02-14 04:38:09 | INFO  | Setting property replace_frequency: never 2026-02-14 04:38:13.261284 | orchestrator | 2026-02-14 04:38:09 | INFO  | Setting property uuid_validity: none 2026-02-14 04:38:13.261295 | orchestrator | 2026-02-14 04:38:09 | INFO  | Setting property provided_until: none 2026-02-14 04:38:13.261306 | orchestrator | 2026-02-14 04:38:10 | INFO  | Setting property image_description: Cirros 2026-02-14 04:38:13.261317 | orchestrator | 2026-02-14 04:38:10 | INFO  | 
Setting property image_name: Cirros 2026-02-14 04:38:13.261327 | orchestrator | 2026-02-14 04:38:10 | INFO  | Setting property internal_version: 0.6.3 2026-02-14 04:38:13.261347 | orchestrator | 2026-02-14 04:38:11 | INFO  | Setting property image_original_user: cirros 2026-02-14 04:38:13.261358 | orchestrator | 2026-02-14 04:38:11 | INFO  | Setting property os_version: 0.6.3 2026-02-14 04:38:13.261369 | orchestrator | 2026-02-14 04:38:11 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-02-14 04:38:13.261380 | orchestrator | 2026-02-14 04:38:11 | INFO  | Setting property image_build_date: 2024-09-26 2026-02-14 04:38:13.261391 | orchestrator | 2026-02-14 04:38:12 | INFO  | Checking status of 'Cirros 0.6.3' 2026-02-14 04:38:13.261402 | orchestrator | 2026-02-14 04:38:12 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-02-14 04:38:13.261413 | orchestrator | 2026-02-14 04:38:12 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-02-14 04:38:13.583631 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2026-02-14 04:38:19.885902 | orchestrator | 2026-02-14 04:38:19 | INFO  | date: 2026-02-14 2026-02-14 04:38:19.886085 | orchestrator | 2026-02-14 04:38:19 | INFO  | image: octavia-amphora-haproxy-2024.2.20260214.qcow2 2026-02-14 04:38:19.886130 | orchestrator | 2026-02-14 04:38:19 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260214.qcow2 2026-02-14 04:38:19.886146 | orchestrator | 2026-02-14 04:38:19 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260214.qcow2.CHECKSUM 2026-02-14 04:38:20.048830 | orchestrator | 2026-02-14 04:38:20 | INFO  | checksum: 664b7dbe27097443e406a4efe3d6f2510f7f970d25169f29f7dcf687b232992b 2026-02-14 04:38:20.135625 | orchestrator | 
2026-02-14 04:38:20 | INFO  | It takes a moment until task 24fc41ab-4da6-4eb2-9f63-4053ebb5b05b (image-manager) has been started and output is visible here. 2026-02-14 04:39:42.953288 | orchestrator | 2026-02-14 04:38:22 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-02-14' 2026-02-14 04:39:42.953437 | orchestrator | 2026-02-14 04:38:22 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260214.qcow2: 200 2026-02-14 04:39:42.953467 | orchestrator | 2026-02-14 04:38:22 | INFO  | Importing image OpenStack Octavia Amphora 2026-02-14 2026-02-14 04:39:42.953488 | orchestrator | 2026-02-14 04:38:22 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260214.qcow2 2026-02-14 04:39:42.953509 | orchestrator | 2026-02-14 04:38:24 | INFO  | Waiting for image to leave queued state... 2026-02-14 04:39:42.953529 | orchestrator | 2026-02-14 04:38:26 | INFO  | Waiting for import to complete... 2026-02-14 04:39:42.953544 | orchestrator | 2026-02-14 04:38:36 | INFO  | Waiting for import to complete... 2026-02-14 04:39:42.953554 | orchestrator | 2026-02-14 04:38:46 | INFO  | Waiting for import to complete... 2026-02-14 04:39:42.953565 | orchestrator | 2026-02-14 04:38:56 | INFO  | Waiting for import to complete... 2026-02-14 04:39:42.953579 | orchestrator | 2026-02-14 04:39:06 | INFO  | Waiting for import to complete... 2026-02-14 04:39:42.953590 | orchestrator | 2026-02-14 04:39:16 | INFO  | Waiting for import to complete... 2026-02-14 04:39:42.953601 | orchestrator | 2026-02-14 04:39:26 | INFO  | Waiting for import to complete... 
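The repeated "Waiting for import to complete..." entries at roughly ten-second intervals correspond to a plain poll-until-active loop with a timeout. A hedged sketch of that pattern (the `get_status` callback and helper name are assumptions, not the image-manager's API; injecting `sleep` keeps it testable without real delays):

```python
import time

def wait_for_status(get_status, wanted="active", interval=10,
                    timeout=600, sleep=time.sleep):
    """Poll get_status() until it returns `wanted` or the timeout elapses.

    get_status stands in for a Glance image status lookup that would
    report queued -> importing -> active during an import.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status == wanted:
            return status
        sleep(interval)  # the log shows ~10 s between poll messages
    raise TimeoutError(f"image did not reach {wanted!r} within {timeout}s")
```

With `interval=10` this reproduces the cadence seen above: one log line per poll until the image leaves the importing state.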
2026-02-14 04:39:42.953613 | orchestrator | 2026-02-14 04:39:36 | INFO  | Import of 'OpenStack Octavia Amphora 2026-02-14' successfully completed, reloading images 2026-02-14 04:39:42.953624 | orchestrator | 2026-02-14 04:39:37 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-02-14' 2026-02-14 04:39:42.953721 | orchestrator | 2026-02-14 04:39:37 | INFO  | Setting internal_version = 2026-02-14 2026-02-14 04:39:42.953735 | orchestrator | 2026-02-14 04:39:37 | INFO  | Setting image_original_user = ubuntu 2026-02-14 04:39:42.953747 | orchestrator | 2026-02-14 04:39:37 | INFO  | Adding tag amphora 2026-02-14 04:39:42.953758 | orchestrator | 2026-02-14 04:39:37 | INFO  | Adding tag os:ubuntu 2026-02-14 04:39:42.953769 | orchestrator | 2026-02-14 04:39:37 | INFO  | Setting property architecture: x86_64 2026-02-14 04:39:42.953780 | orchestrator | 2026-02-14 04:39:38 | INFO  | Setting property hw_disk_bus: scsi 2026-02-14 04:39:42.953791 | orchestrator | 2026-02-14 04:39:38 | INFO  | Setting property hw_rng_model: virtio 2026-02-14 04:39:42.953802 | orchestrator | 2026-02-14 04:39:38 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-02-14 04:39:42.953814 | orchestrator | 2026-02-14 04:39:39 | INFO  | Setting property hw_watchdog_action: reset 2026-02-14 04:39:42.953827 | orchestrator | 2026-02-14 04:39:39 | INFO  | Setting property hypervisor_type: qemu 2026-02-14 04:39:42.953838 | orchestrator | 2026-02-14 04:39:39 | INFO  | Setting property os_distro: ubuntu 2026-02-14 04:39:42.953851 | orchestrator | 2026-02-14 04:39:39 | INFO  | Setting property replace_frequency: quarterly 2026-02-14 04:39:42.953864 | orchestrator | 2026-02-14 04:39:39 | INFO  | Setting property uuid_validity: last-1 2026-02-14 04:39:42.953876 | orchestrator | 2026-02-14 04:39:40 | INFO  | Setting property provided_until: none 2026-02-14 04:39:42.953888 | orchestrator | 2026-02-14 04:39:40 | INFO  | Setting property os_purpose: network 2026-02-14 04:39:42.953917 | orchestrator 
| 2026-02-14 04:39:40 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2026-02-14 04:39:42.953930 | orchestrator | 2026-02-14 04:39:40 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2026-02-14 04:39:42.953942 | orchestrator | 2026-02-14 04:39:41 | INFO  | Setting property internal_version: 2026-02-14 2026-02-14 04:39:42.953956 | orchestrator | 2026-02-14 04:39:41 | INFO  | Setting property image_original_user: ubuntu 2026-02-14 04:39:42.953969 | orchestrator | 2026-02-14 04:39:41 | INFO  | Setting property os_version: 2026-02-14 2026-02-14 04:39:42.953982 | orchestrator | 2026-02-14 04:39:41 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260214.qcow2 2026-02-14 04:39:42.953994 | orchestrator | 2026-02-14 04:39:42 | INFO  | Setting property image_build_date: 2026-02-14 2026-02-14 04:39:42.954007 | orchestrator | 2026-02-14 04:39:42 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-02-14' 2026-02-14 04:39:42.954095 | orchestrator | 2026-02-14 04:39:42 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-02-14' 2026-02-14 04:39:42.954109 | orchestrator | 2026-02-14 04:39:42 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2026-02-14 04:39:42.954122 | orchestrator | 2026-02-14 04:39:42 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2026-02-14 04:39:42.954136 | orchestrator | 2026-02-14 04:39:42 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2026-02-14 04:39:42.954148 | orchestrator | 2026-02-14 04:39:42 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2026-02-14 04:39:43.741034 | orchestrator | ok: Runtime: 0:03:23.031337 2026-02-14 04:39:43.760393 | 2026-02-14 04:39:43.760560 | TASK [Run checks] 2026-02-14 04:39:44.521005 | orchestrator | + set -e 2026-02-14 04:39:44.521200 | orchestrator | + source 
/opt/configuration/scripts/include.sh 2026-02-14 04:39:44.521229 | orchestrator | ++ export INTERACTIVE=false 2026-02-14 04:39:44.521262 | orchestrator | ++ INTERACTIVE=false 2026-02-14 04:39:44.521287 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-14 04:39:44.521307 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-14 04:39:44.521328 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-02-14 04:39:44.522789 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-02-14 04:39:44.527807 | orchestrator | 2026-02-14 04:39:44.527885 | orchestrator | # CHECK 2026-02-14 04:39:44.527907 | orchestrator | 2026-02-14 04:39:44.527935 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-14 04:39:44.527958 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-14 04:39:44.527978 | orchestrator | + echo 2026-02-14 04:39:44.527997 | orchestrator | + echo '# CHECK' 2026-02-14 04:39:44.528018 | orchestrator | + echo 2026-02-14 04:39:44.528045 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-14 04:39:44.529350 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-14 04:39:44.579320 | orchestrator | 2026-02-14 04:39:44.579419 | orchestrator | ## Containers @ testbed-manager 2026-02-14 04:39:44.579436 | orchestrator | 2026-02-14 04:39:44.579450 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-14 04:39:44.579463 | orchestrator | + echo 2026-02-14 04:39:44.579474 | orchestrator | + echo '## Containers @ testbed-manager' 2026-02-14 04:39:44.579486 | orchestrator | + echo 2026-02-14 04:39:44.579498 | orchestrator | + osism container testbed-manager ps 2026-02-14 04:39:46.599946 | orchestrator | 2026-02-14 04:39:46 | INFO  | Creating empty known_hosts file: /share/known_hosts 2026-02-14 04:39:47.003384 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-14 04:39:47.003513 | orchestrator | 51f56b3d5737 
registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_blackbox_exporter 2026-02-14 04:39:47.003538 | orchestrator | 396a7d61e56c registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_alertmanager 2026-02-14 04:39:47.003552 | orchestrator | a87580bcf66e registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor 2026-02-14 04:39:47.003564 | orchestrator | 41ea2fba5e08 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-02-14 04:39:47.003576 | orchestrator | f6e2aabdc0a4 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_server 2026-02-14 04:39:47.003591 | orchestrator | 2d15df7e30d9 registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 58 minutes ago Up 58 minutes cephclient 2026-02-14 04:39:47.003604 | orchestrator | 6a4c9a18f701 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-02-14 04:39:47.003615 | orchestrator | 1ed4bb055a16 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-02-14 04:39:47.003716 | orchestrator | 5279c7fbccae registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-14 04:39:47.003731 | orchestrator | 537823fd5e9f registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 2 hours ago Up 2 hours openstackclient 2026-02-14 04:39:47.003743 | orchestrator | 0074665ab007 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 2 hours ago Up 2 hours (healthy) 80/tcp phpmyadmin 2026-02-14 04:39:47.003754 | 
orchestrator | 7a16c5faa25f registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 2 hours ago Up 2 hours (healthy) 8080/tcp homer 2026-02-14 04:39:47.003766 | orchestrator | 9e93c421b24f registry.osism.tech/osism/cgit:1.2.3 "httpd-foreground" 2 hours ago Up 2 hours 80/tcp cgit 2026-02-14 04:39:47.003778 | orchestrator | 05ef771a19a1 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:3128->3128/tcp squid 2026-02-14 04:39:47.003810 | orchestrator | 792b938ac839 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" 2 hours ago Up 2 hours (healthy) manager-inventory_reconciler-1 2026-02-14 04:39:47.003833 | orchestrator | bde7cca414fb registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) ceph-ansible 2026-02-14 04:39:47.003845 | orchestrator | 59f796d8c982 registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-kubernetes 2026-02-14 04:39:47.003856 | orchestrator | d426ac9f5737 registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-ansible 2026-02-14 04:39:47.003867 | orchestrator | a5b48a2cae63 registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) kolla-ansible 2026-02-14 04:39:47.003879 | orchestrator | 4a00ef7ed528 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 2 hours ago Up 2 hours (healthy) 8000/tcp manager-ara-server-1 2026-02-14 04:39:47.003890 | orchestrator | 081c97028fdc registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" 2 hours ago Up 2 hours 192.168.16.5:3000->3000/tcp osism-frontend 2026-02-14 04:39:47.003901 | orchestrator | 09cadf2f0ddb registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-listener-1 2026-02-14 
04:39:47.003921 | orchestrator | 970b28d52790 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-openstack-1 2026-02-14 04:39:47.003932 | orchestrator | 11ddbe2dedb0 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-beat-1 2026-02-14 04:39:47.003943 | orchestrator | 09322f73a203 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 6379/tcp manager-redis-1 2026-02-14 04:39:47.003954 | orchestrator | 08417658a93e registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2026-02-14 04:39:47.003966 | orchestrator | 81b40d7e1a0c registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" 2 hours ago Up 2 hours (healthy) osismclient 2026-02-14 04:39:47.003977 | orchestrator | d3a9c85ffeff registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-flower-1 2026-02-14 04:39:47.004036 | orchestrator | fe8ec3471100 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 3306/tcp manager-mariadb-1 2026-02-14 04:39:47.004054 | orchestrator | d6cd12577715 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2026-02-14 04:39:47.320754 | orchestrator | 2026-02-14 04:39:47.320879 | orchestrator | ## Images @ testbed-manager 2026-02-14 04:39:47.320907 | orchestrator | 2026-02-14 04:39:47.320925 | orchestrator | + echo 2026-02-14 04:39:47.320944 | orchestrator | + echo '## Images @ testbed-manager' 2026-02-14 04:39:47.320962 | orchestrator | + echo 2026-02-14 04:39:47.320984 | orchestrator | + osism container testbed-manager images 2026-02-14 04:39:49.776406 | orchestrator | 
REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-14 04:39:49.776554 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 f895694e555a 25 hours ago 239MB 2026-02-14 04:39:49.776574 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 2 weeks ago 41.4MB 2026-02-14 04:39:49.776587 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 2 months ago 11.5MB 2026-02-14 04:39:49.776599 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 2 months ago 608MB 2026-02-14 04:39:49.776610 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-02-14 04:39:49.776621 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-02-14 04:39:49.776665 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-14 04:39:49.776681 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 2 months ago 308MB 2026-02-14 04:39:49.776692 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-14 04:39:49.776729 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 2 months ago 404MB 2026-02-14 04:39:49.776741 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 2 months ago 839MB 2026-02-14 04:39:49.776752 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-14 04:39:49.776763 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 2 months ago 330MB 2026-02-14 04:39:49.776774 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 2 months ago 613MB 2026-02-14 04:39:49.776785 | orchestrator | 
registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 2 months ago 560MB 2026-02-14 04:39:49.776796 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 2 months ago 1.23GB 2026-02-14 04:39:49.776807 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 2 months ago 383MB 2026-02-14 04:39:49.776818 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 2 months ago 238MB 2026-02-14 04:39:49.776830 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 3 months ago 334MB 2026-02-14 04:39:49.776841 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 4 months ago 742MB 2026-02-14 04:39:49.776852 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 5 months ago 275MB 2026-02-14 04:39:49.776863 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 6 months ago 226MB 2026-02-14 04:39:49.776874 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 9 months ago 453MB 2026-02-14 04:39:49.776885 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 20 months ago 146MB 2026-02-14 04:39:49.776896 | orchestrator | registry.osism.tech/osism/cgit 1.2.3 16e7285642b1 2 years ago 545MB 2026-02-14 04:39:50.245429 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-14 04:39:50.245545 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-14 04:39:50.307187 | orchestrator | 2026-02-14 04:39:50.307274 | orchestrator | ## Containers @ testbed-node-0 2026-02-14 04:39:50.307286 | orchestrator | 2026-02-14 04:39:50.307294 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-14 04:39:50.307302 | orchestrator | + echo 2026-02-14 04:39:50.307310 | orchestrator | + echo '## Containers @ testbed-node-0' 2026-02-14 04:39:50.307319 | orchestrator | + echo 2026-02-14 04:39:50.307327 | orchestrator | + osism container testbed-node-0 ps 2026-02-14 
04:39:52.749596 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-14 04:39:52.749763 | orchestrator | 8b552cd516d0 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-02-14 04:39:52.749803 | orchestrator | d21769750e51 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) magnum_api 2026-02-14 04:39:52.749815 | orchestrator | 9612270bd9f3 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2026-02-14 04:39:52.749825 | orchestrator | b5f8a252519c registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-02-14 04:39:52.749857 | orchestrator | 79c90c360070 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor 2026-02-14 04:39:52.749868 | orchestrator | 40c64192b5cd registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-02-14 04:39:52.749971 | orchestrator | 8fcbaec5d362 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-02-14 04:39:52.749987 | orchestrator | 9dad1330ba1f registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-02-14 04:39:52.749997 | orchestrator | 7a5d0002a2fd registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-02-14 04:39:52.750008 | orchestrator | 01cdf04436f3 
registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler 2026-02-14 04:39:52.750059 | orchestrator | 056587ab7c1b registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data 2026-02-14 04:39:52.750072 | orchestrator | 20e86f2d771e registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 14 minutes (healthy) manila_api 2026-02-14 04:39:52.750082 | orchestrator | 021497c6d363 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_notifier 2026-02-14 04:39:52.750091 | orchestrator | 6247f789605d registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_listener 2026-02-14 04:39:52.750105 | orchestrator | 66ad4ee3eccb registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_evaluator 2026-02-14 04:39:52.750122 | orchestrator | a0e6e439d630 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-02-14 04:39:52.750138 | orchestrator | 7ef84d15816d registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes ceilometer_central 2026-02-14 04:39:52.750155 | orchestrator | 0373e894107e registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) ceilometer_notification 2026-02-14 04:39:52.750173 | orchestrator | f8f0295c67c6 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker 2026-02-14 04:39:52.750223 | orchestrator | ef5fb648b26a 
registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_housekeeping 2026-02-14 04:39:52.750235 | orchestrator | e6885a678ee8 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_health_manager 2026-02-14 04:39:52.750245 | orchestrator | 9b3c11dfa085 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes octavia_driver_agent 2026-02-14 04:39:52.750265 | orchestrator | 6da535e2002a registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api 2026-02-14 04:39:52.750274 | orchestrator | 56111e206711 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker 2026-02-14 04:39:52.750284 | orchestrator | 54c060caf2d1 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_mdns 2026-02-14 04:39:52.750298 | orchestrator | 22e0a6f0e354 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer 2026-02-14 04:39:52.750308 | orchestrator | b65c26fc50aa registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_central 2026-02-14 04:39:52.750317 | orchestrator | 3fc9f7cfc838 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api 2026-02-14 04:39:52.750327 | orchestrator | c2ed40fcb38c registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9 
2026-02-14 04:39:52.750337 | orchestrator | 3f72612bef73 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker 2026-02-14 04:39:52.750346 | orchestrator | 5a14d7f29ba6 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener 2026-02-14 04:39:52.750356 | orchestrator | 08c2d96eb186 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api 2026-02-14 04:39:52.750368 | orchestrator | a74dc0db31df registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_backup 2026-02-14 04:39:52.750383 | orchestrator | 92cb762f81f9 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_volume 2026-02-14 04:39:52.750399 | orchestrator | 3e1f371e478f registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler 2026-02-14 04:39:52.750415 | orchestrator | 017054755aca registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api 2026-02-14 04:39:52.750431 | orchestrator | 432066687d8b registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) glance_api 2026-02-14 04:39:52.750447 | orchestrator | fa312c047f01 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_console 2026-02-14 04:39:52.750464 | orchestrator | 74701e8b81c5 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) 
skyline_apiserver 2026-02-14 04:39:52.750493 | orchestrator | 65fcfab27fe0 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) horizon 2026-02-14 04:39:52.750518 | orchestrator | 90b04039d1ee registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_novncproxy 2026-02-14 04:39:52.750528 | orchestrator | 6dec89027a67 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_conductor 2026-02-14 04:39:52.750543 | orchestrator | e1be47e64050 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_api 2026-02-14 04:39:52.750553 | orchestrator | 80629b9be242 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_scheduler 2026-02-14 04:39:52.750942 | orchestrator | 98581148d11d registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 49 minutes ago Up 49 minutes (healthy) neutron_server 2026-02-14 04:39:52.750965 | orchestrator | 0ea4fbbd1558 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) placement_api 2026-02-14 04:39:52.750976 | orchestrator | ca1dfa01ea8c registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone 2026-02-14 04:39:52.750987 | orchestrator | 8e5557a21df9 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_fernet 2026-02-14 04:39:52.750997 | orchestrator | 8dc4b0e6d9f4 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_ssh 2026-02-14 04:39:52.751008 | 
orchestrator | 2574cfde61cc registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 57 minutes ago Up 57 minutes ceph-mgr-testbed-node-0 2026-02-14 04:39:52.751019 | orchestrator | bdb5497a9a49 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-0 2026-02-14 04:39:52.751030 | orchestrator | 775cd2ba237c registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-0 2026-02-14 04:39:52.751040 | orchestrator | bf77e02640ef registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-02-14 04:39:52.751051 | orchestrator | b02ef219d27b registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-02-14 04:39:52.751062 | orchestrator | 6fe77f556a09 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-02-14 04:39:52.751073 | orchestrator | 1802968dc540 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-02-14 04:39:52.751091 | orchestrator | 4293e34853b6 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-02-14 04:39:52.751102 | orchestrator | 3459ecbca5a4 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-02-14 04:39:52.751123 | orchestrator | 9bbb84ba76fb registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-02-14 04:39:52.751134 | orchestrator | 8563fb7f6b43 
registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-02-14 04:39:52.751144 | orchestrator | 4c0f46daa092 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-02-14 04:39:52.751155 | orchestrator | 36ca5fa5a10f registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-02-14 04:39:52.751166 | orchestrator | e1dcee90ce2d registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-02-14 04:39:52.751176 | orchestrator | 406fd175a873 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-02-14 04:39:52.751187 | orchestrator | 5aebbdd574d5 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch 2026-02-14 04:39:52.751206 | orchestrator | 33995d348cbf registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived 2026-02-14 04:39:52.751218 | orchestrator | cf3799a6a964 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql 2026-02-14 04:39:52.751228 | orchestrator | 3e529622312e registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy 2026-02-14 04:39:52.751239 | orchestrator | bc1e3e2c44ae registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-02-14 04:39:52.751250 | orchestrator | 005e1a78f1aa registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 
hours ago Up 2 hours kolla_toolbox 2026-02-14 04:39:52.751261 | orchestrator | c544478cf4b2 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-14 04:39:53.137987 | orchestrator | 2026-02-14 04:39:53.138080 | orchestrator | ## Images @ testbed-node-0 2026-02-14 04:39:53.138088 | orchestrator | 2026-02-14 04:39:53.138093 | orchestrator | + echo 2026-02-14 04:39:53.138098 | orchestrator | + echo '## Images @ testbed-node-0' 2026-02-14 04:39:53.138103 | orchestrator | + echo 2026-02-14 04:39:53.138108 | orchestrator | + osism container testbed-node-0 images 2026-02-14 04:39:55.517896 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-14 04:39:55.518095 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB 2026-02-14 04:39:55.518117 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB 2026-02-14 04:39:55.518129 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB 2026-02-14 04:39:55.518141 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB 2026-02-14 04:39:55.518176 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB 2026-02-14 04:39:55.518188 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-02-14 04:39:55.518199 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-02-14 04:39:55.518210 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB 2026-02-14 04:39:55.518221 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB 2026-02-14 04:39:55.518232 | orchestrator | 
registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB 2026-02-14 04:39:55.518243 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-14 04:39:55.518254 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB 2026-02-14 04:39:55.518265 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB 2026-02-14 04:39:55.518276 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB 2026-02-14 04:39:55.518286 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB 2026-02-14 04:39:55.518297 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB 2026-02-14 04:39:55.518308 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB 2026-02-14 04:39:55.518319 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-14 04:39:55.518330 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB 2026-02-14 04:39:55.518341 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-14 04:39:55.518352 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB 2026-02-14 04:39:55.518363 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB 2026-02-14 04:39:55.518374 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB 2026-02-14 04:39:55.518387 | 
orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB 2026-02-14 04:39:55.518400 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB 2026-02-14 04:39:55.518412 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB 2026-02-14 04:39:55.518424 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB 2026-02-14 04:39:55.518443 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB 2026-02-14 04:39:55.518456 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB 2026-02-14 04:39:55.518469 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB 2026-02-14 04:39:55.518489 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB 2026-02-14 04:39:55.518538 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB 2026-02-14 04:39:55.518552 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB 2026-02-14 04:39:55.518565 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB 2026-02-14 04:39:55.518577 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB 2026-02-14 04:39:55.518590 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB 2026-02-14 04:39:55.518603 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB 2026-02-14 04:39:55.518615 | orchestrator | 
registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB 2026-02-14 04:39:55.518687 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB 2026-02-14 04:39:55.518701 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB 2026-02-14 04:39:55.518714 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB 2026-02-14 04:39:55.518726 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB 2026-02-14 04:39:55.518739 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB 2026-02-14 04:39:55.518749 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB 2026-02-14 04:39:55.518760 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB 2026-02-14 04:39:55.518771 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB 2026-02-14 04:39:55.518782 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB 2026-02-14 04:39:55.518794 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB 2026-02-14 04:39:55.518805 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB 2026-02-14 04:39:55.518816 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB 2026-02-14 04:39:55.518827 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB 2026-02-14 04:39:55.518837 | orchestrator | 
registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB 2026-02-14 04:39:55.518848 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB 2026-02-14 04:39:55.518859 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB 2026-02-14 04:39:55.518870 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB 2026-02-14 04:39:55.518881 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB 2026-02-14 04:39:55.518900 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB 2026-02-14 04:39:55.518911 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB 2026-02-14 04:39:55.518928 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB 2026-02-14 04:39:55.518939 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB 2026-02-14 04:39:55.518950 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB 2026-02-14 04:39:55.518961 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB 2026-02-14 04:39:55.518972 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB 2026-02-14 04:39:55.518992 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB 2026-02-14 04:39:55.519004 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB 2026-02-14 04:39:55.519015 | orchestrator | 
registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB 2026-02-14 04:39:55.519025 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB 2026-02-14 04:39:55.519036 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB 2026-02-14 04:39:55.519047 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB 2026-02-14 04:39:55.827131 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-14 04:39:55.827915 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-14 04:39:55.873865 | orchestrator | 2026-02-14 04:39:55.873965 | orchestrator | ## Containers @ testbed-node-1 2026-02-14 04:39:55.873987 | orchestrator | 2026-02-14 04:39:55.873999 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-14 04:39:55.874011 | orchestrator | + echo 2026-02-14 04:39:55.874079 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-02-14 04:39:55.874091 | orchestrator | + echo 2026-02-14 04:39:55.874103 | orchestrator | + osism container testbed-node-1 ps 2026-02-14 04:39:58.308257 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-14 04:39:58.308363 | orchestrator | 64c0747b06d1 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-02-14 04:39:58.308400 | orchestrator | 1f5098bc803b registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-02-14 04:39:58.308425 | orchestrator | 63672c4e71d5 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2026-02-14 04:39:58.308437 | orchestrator | 9993060a4fdd registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 
minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-02-14 04:39:58.308450 | orchestrator | 3db9b43cadb7 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor 2026-02-14 04:39:58.308461 | orchestrator | ae550c34e07b registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-02-14 04:39:58.308498 | orchestrator | 7551582757cd registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-02-14 04:39:58.308510 | orchestrator | a5cfa9116c85 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-02-14 04:39:58.308521 | orchestrator | c70b90480b2f registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-02-14 04:39:58.308532 | orchestrator | 44edc01ed241 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler 2026-02-14 04:39:58.308543 | orchestrator | 3fbaf2371697 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data 2026-02-14 04:39:58.308555 | orchestrator | ea1580653784 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_api 2026-02-14 04:39:58.308584 | orchestrator | 773c3ae6b4f8 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_notifier 2026-02-14 04:39:58.308596 | orchestrator | 944b3dec3e6f registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 
"dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_listener 2026-02-14 04:39:58.308607 | orchestrator | 4dd998b4a8d6 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-02-14 04:39:58.308618 | orchestrator | 2e42131f2e25 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-02-14 04:39:58.308681 | orchestrator | b50ef61daa3c registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes ceilometer_central 2026-02-14 04:39:58.308694 | orchestrator | 8913bee56b0a registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) ceilometer_notification 2026-02-14 04:39:58.308705 | orchestrator | b34aabca73f5 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker 2026-02-14 04:39:58.308734 | orchestrator | caa3c39e2e47 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_housekeeping 2026-02-14 04:39:58.308746 | orchestrator | 9cdb7b664a50 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_health_manager 2026-02-14 04:39:58.308757 | orchestrator | b5862a584ab6 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes octavia_driver_agent 2026-02-14 04:39:58.308768 | orchestrator | 158da85be32b registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api 2026-02-14 04:39:58.309472 | orchestrator | 096536eb4cdc 
registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker 2026-02-14 04:39:58.309561 | orchestrator | 4c65a43d75cb registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_mdns 2026-02-14 04:39:58.309572 | orchestrator | edc85380bd32 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer 2026-02-14 04:39:58.309579 | orchestrator | bf7f2cd8c729 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_central 2026-02-14 04:39:58.309587 | orchestrator | f6cba22644d9 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api 2026-02-14 04:39:58.309595 | orchestrator | 72fd3d4f7bce registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9 2026-02-14 04:39:58.309602 | orchestrator | e983e973d311 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker 2026-02-14 04:39:58.309609 | orchestrator | 3c6b9eda4d52 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener 2026-02-14 04:39:58.309618 | orchestrator | 1fc9047c6565 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api 2026-02-14 04:39:58.309656 | orchestrator | 7647310a2f79 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup 2026-02-14 
04:39:58.309664 | orchestrator | 1b3c7d457630 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_volume 2026-02-14 04:39:58.309671 | orchestrator | bd27004e8061 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler 2026-02-14 04:39:58.309679 | orchestrator | 26e40fdd4ebd registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_api 2026-02-14 04:39:58.309693 | orchestrator | 09a7003f8b19 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) glance_api 2026-02-14 04:39:58.309701 | orchestrator | 9b9c10eec2cf registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_console 2026-02-14 04:39:58.309708 | orchestrator | cdaeb28420b3 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_apiserver 2026-02-14 04:39:58.309716 | orchestrator | 31ad8906d733 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 38 minutes ago Up 37 minutes (healthy) horizon 2026-02-14 04:39:58.309723 | orchestrator | a80d1b7a38be registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_novncproxy 2026-02-14 04:39:58.309735 | orchestrator | 7a358f1a8a82 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_conductor 2026-02-14 04:39:58.309742 | orchestrator | 4801a2a6c081 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_api 2026-02-14 04:39:58.309749 | orchestrator | 
ac108b5547be registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_scheduler 2026-02-14 04:39:58.309769 | orchestrator | 0857b14566a6 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 49 minutes ago Up 49 minutes (healthy) neutron_server 2026-02-14 04:39:58.309777 | orchestrator | 5dd4b0690b65 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) placement_api 2026-02-14 04:39:58.309785 | orchestrator | 8961bd730253 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone 2026-02-14 04:39:58.309792 | orchestrator | 8a82e1bb8fa6 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_fernet 2026-02-14 04:39:58.309799 | orchestrator | ba9a0f73eb26 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 54 minutes (healthy) keystone_ssh 2026-02-14 04:39:58.309807 | orchestrator | f25a94d0c75f registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 57 minutes ago Up 57 minutes ceph-mgr-testbed-node-1 2026-02-14 04:39:58.309815 | orchestrator | 69f2d9621fcf registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-1 2026-02-14 04:39:58.309822 | orchestrator | 26dcb1313f5c registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-1 2026-02-14 04:39:58.309829 | orchestrator | 84a464999ce6 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-02-14 04:39:58.309836 | orchestrator | 3e25e1883b6d registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 
"dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-02-14 04:39:58.309844 | orchestrator | 4ade9c466f10 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-02-14 04:39:58.309851 | orchestrator | a32f048b21e8 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-02-14 04:39:58.309858 | orchestrator | ab4438c27046 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-02-14 04:39:58.309866 | orchestrator | d3b6476eee7a registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-02-14 04:39:58.309873 | orchestrator | 021d195580ae registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-02-14 04:39:58.309885 | orchestrator | 46e272276050 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-02-14 04:39:58.309892 | orchestrator | ba1eb349ec28 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-02-14 04:39:58.309900 | orchestrator | af018177e420 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-02-14 04:39:58.309907 | orchestrator | 5743a80a61e9 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-02-14 04:39:58.309914 | orchestrator | 5013d3139e91 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 
"dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-02-14 04:39:58.309929 | orchestrator | afa80e219722 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch 2026-02-14 04:39:58.309937 | orchestrator | 27f2f09e68be registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up About an hour keepalived 2026-02-14 04:39:58.309945 | orchestrator | 11d29b0a1263 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql 2026-02-14 04:39:58.309952 | orchestrator | fad28c2aab87 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy 2026-02-14 04:39:58.309960 | orchestrator | cf7f53654273 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-02-14 04:39:58.309970 | orchestrator | abb94182d8cf registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-02-14 04:39:58.309977 | orchestrator | 7d412ffc1021 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-14 04:39:58.638382 | orchestrator | 2026-02-14 04:39:58.638475 | orchestrator | ## Images @ testbed-node-1 2026-02-14 04:39:58.638491 | orchestrator | 2026-02-14 04:39:58.638501 | orchestrator | + echo 2026-02-14 04:39:58.638512 | orchestrator | + echo '## Images @ testbed-node-1' 2026-02-14 04:39:58.638524 | orchestrator | + echo 2026-02-14 04:39:58.638535 | orchestrator | + osism container testbed-node-1 images 2026-02-14 04:40:01.176599 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-14 04:40:01.176795 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB 2026-02-14 
04:40:01.177069 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB 2026-02-14 04:40:01.177091 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB 2026-02-14 04:40:01.177103 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB 2026-02-14 04:40:01.177115 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB 2026-02-14 04:40:01.177125 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-02-14 04:40:01.177160 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-02-14 04:40:01.177172 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB 2026-02-14 04:40:01.177183 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB 2026-02-14 04:40:01.177194 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB 2026-02-14 04:40:01.177204 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-14 04:40:01.177215 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB 2026-02-14 04:40:01.177226 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB 2026-02-14 04:40:01.177237 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB 2026-02-14 04:40:01.177247 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB 2026-02-14 04:40:01.177258 | orchestrator | 
registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB 2026-02-14 04:40:01.177269 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB 2026-02-14 04:40:01.177279 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-14 04:40:01.177298 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB 2026-02-14 04:40:01.177316 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-14 04:40:01.177336 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB 2026-02-14 04:40:01.177353 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB 2026-02-14 04:40:01.177371 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB 2026-02-14 04:40:01.177389 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB 2026-02-14 04:40:01.177407 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB 2026-02-14 04:40:01.177425 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB 2026-02-14 04:40:01.177443 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB 2026-02-14 04:40:01.177461 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB 2026-02-14 04:40:01.177479 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 
months ago 976MB 2026-02-14 04:40:01.177496 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB 2026-02-14 04:40:01.177516 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB 2026-02-14 04:40:01.177535 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB 2026-02-14 04:40:01.177565 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB 2026-02-14 04:40:01.177577 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB 2026-02-14 04:40:01.177602 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB 2026-02-14 04:40:01.177613 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB 2026-02-14 04:40:01.177691 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB 2026-02-14 04:40:01.177726 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB 2026-02-14 04:40:01.177740 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB 2026-02-14 04:40:01.177753 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB 2026-02-14 04:40:01.177766 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB 2026-02-14 04:40:01.177778 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB 2026-02-14 04:40:01.177791 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB 2026-02-14 
04:40:01.177804 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB 2026-02-14 04:40:01.177816 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB 2026-02-14 04:40:01.177829 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB 2026-02-14 04:40:01.177842 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB 2026-02-14 04:40:01.177855 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB 2026-02-14 04:40:01.177867 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB 2026-02-14 04:40:01.177880 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB 2026-02-14 04:40:01.177892 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB 2026-02-14 04:40:01.177904 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB 2026-02-14 04:40:01.177917 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB 2026-02-14 04:40:01.177929 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB 2026-02-14 04:40:01.177942 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB 2026-02-14 04:40:01.177955 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB 2026-02-14 04:40:01.177967 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB 
2026-02-14 04:40:01.177979 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB 2026-02-14 04:40:01.177992 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB 2026-02-14 04:40:01.178012 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB 2026-02-14 04:40:01.178082 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB 2026-02-14 04:40:01.178094 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB 2026-02-14 04:40:01.178105 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB 2026-02-14 04:40:01.178116 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB 2026-02-14 04:40:01.178127 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB 2026-02-14 04:40:01.178137 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB 2026-02-14 04:40:01.178158 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB 2026-02-14 04:40:01.178169 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB 2026-02-14 04:40:01.178180 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB 2026-02-14 04:40:01.492620 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-14 04:40:01.493613 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-14 04:40:01.552912 | orchestrator | 2026-02-14 04:40:01.552996 | orchestrator | ## Containers @ testbed-node-2 2026-02-14 
04:40:01.553009 | orchestrator | 2026-02-14 04:40:01.553018 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-14 04:40:01.553027 | orchestrator | + echo 2026-02-14 04:40:01.553035 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-02-14 04:40:01.553044 | orchestrator | + echo 2026-02-14 04:40:01.553053 | orchestrator | + osism container testbed-node-2 ps 2026-02-14 04:40:04.047163 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-14 04:40:04.047268 | orchestrator | 06b9837b9d1a registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-02-14 04:40:04.047285 | orchestrator | 46949db1e8db registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) magnum_api 2026-02-14 04:40:04.047375 | orchestrator | 6579c0abac25 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2026-02-14 04:40:04.047395 | orchestrator | 0d35db895be2 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-02-14 04:40:04.047412 | orchestrator | 84a6ccb1d3db registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor 2026-02-14 04:40:04.047424 | orchestrator | be6f4eff9a50 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-02-14 04:40:04.047435 | orchestrator | 839836da665c registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-02-14 04:40:04.047448 | orchestrator | c483f3dc2563 
registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-02-14 04:40:04.047480 | orchestrator | 370486d5dfab registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-02-14 04:40:04.047492 | orchestrator | 2d41d5471e3e registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler 2026-02-14 04:40:04.047503 | orchestrator | 81314b387b9b registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data 2026-02-14 04:40:04.047514 | orchestrator | b585d44fe3d8 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-02-14 04:40:04.047546 | orchestrator | 4bd0c94db84f registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_notifier 2026-02-14 04:40:04.047558 | orchestrator | 669d05442101 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener 2026-02-14 04:40:04.047569 | orchestrator | eef6ee34853e registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-02-14 04:40:04.047580 | orchestrator | ffec70e1043b registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-02-14 04:40:04.047591 | orchestrator | 872204d4727a registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes ceilometer_central 2026-02-14 04:40:04.047602 | orchestrator | d771ec753d18 
registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification 2026-02-14 04:40:04.047613 | orchestrator | 28b89d1ec97b registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker 2026-02-14 04:40:04.047714 | orchestrator | 4a6fe8cba8f0 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_housekeeping 2026-02-14 04:40:04.047730 | orchestrator | ca24a784df35 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_health_manager 2026-02-14 04:40:04.047744 | orchestrator | 1f9f131cf624 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes octavia_driver_agent 2026-02-14 04:40:04.047757 | orchestrator | 7621b9d57a66 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api 2026-02-14 04:40:04.047840 | orchestrator | 0004ba8999d5 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker 2026-02-14 04:40:04.047854 | orchestrator | c93311337973 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_mdns 2026-02-14 04:40:04.047876 | orchestrator | 1d9b2b41574e registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer 2026-02-14 04:40:04.047887 | orchestrator | f609449bda9c registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 26 minutes (healthy) designate_central 
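The `osism container testbed-node-2 ps` listing above tags most services with `(healthy)` in their STATUS column. A quick way to spot stragglers is to filter on that column; the sketch below is a minimal stand-in (the `check_health` helper and the sample rows are assumptions for illustration, not part of the testbed scripts):

```shell
# Sketch: scan `docker ps`-style "name<TAB>status" rows for containers
# that are running but not yet reporting "(healthy)".
check_health() {
    awk -F'\t' '$2 ~ /^Up/ && $2 !~ /\(healthy\)/ { print $1 }'
}

# Sample rows standing in for the node's container listing (assumed).
printf '%s\t%s\n' \
    magnum_conductor 'Up 3 minutes (healthy)' \
    grafana          'Up 6 minutes' \
    | check_health                     # prints: grafana
```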
2026-02-14 04:40:04.047898 | orchestrator | 747b4c781321 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api 2026-02-14 04:40:04.047909 | orchestrator | 083f34b3cf44 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9 2026-02-14 04:40:04.047920 | orchestrator | de993745ccd2 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker 2026-02-14 04:40:04.047931 | orchestrator | 7e2246e50cf3 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener 2026-02-14 04:40:04.047942 | orchestrator | eb808ac4b29a registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api 2026-02-14 04:40:04.047953 | orchestrator | 56e8f01be5c8 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup 2026-02-14 04:40:04.047964 | orchestrator | 6fe6e80d9ea8 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_volume 2026-02-14 04:40:04.047975 | orchestrator | 83281e4eb3d3 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler 2026-02-14 04:40:04.047986 | orchestrator | 9325aa8bc9cb registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_api 2026-02-14 04:40:04.048001 | orchestrator | 36c55f98888a registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes 
(healthy) glance_api 2026-02-14 04:40:04.048020 | orchestrator | ad5403f66874 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_console 2026-02-14 04:40:04.048038 | orchestrator | 310e598abb07 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_apiserver 2026-02-14 04:40:04.048057 | orchestrator | 2b97f113642d registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) horizon 2026-02-14 04:40:04.048255 | orchestrator | 708365ce6664 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_novncproxy 2026-02-14 04:40:04.048281 | orchestrator | 15e7c313d134 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_conductor 2026-02-14 04:40:04.048293 | orchestrator | de3211f2bbfe registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_api 2026-02-14 04:40:04.048317 | orchestrator | 2572ef9e894c registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_scheduler 2026-02-14 04:40:04.048335 | orchestrator | a75300409d96 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 49 minutes ago Up 49 minutes (healthy) neutron_server 2026-02-14 04:40:04.048354 | orchestrator | 72f7757fb25d registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) placement_api 2026-02-14 04:40:04.048378 | orchestrator | ed12b7deb338 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone 2026-02-14 
04:40:04.048402 | orchestrator | c1d7b8b34dff registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_fernet 2026-02-14 04:40:04.048420 | orchestrator | 2ff89f2decfd registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_ssh 2026-02-14 04:40:04.048437 | orchestrator | b625f446d07f registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 57 minutes ago Up 57 minutes ceph-mgr-testbed-node-2 2026-02-14 04:40:04.048456 | orchestrator | 24732d56e59b registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2 2026-02-14 04:40:04.048485 | orchestrator | 7aff8e7c54ed registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-2 2026-02-14 04:40:04.048502 | orchestrator | 89e3021a9eee registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-02-14 04:40:04.048528 | orchestrator | a5ee4c9e04a8 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-02-14 04:40:04.048546 | orchestrator | 362942bc06f1 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-02-14 04:40:04.048565 | orchestrator | 880920aaabac registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-02-14 04:40:04.048582 | orchestrator | 8abf40b06036 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-02-14 04:40:04.048599 | orchestrator | 8e2d88a2a3ce 
registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-02-14 04:40:04.048616 | orchestrator | a10b6ca418f6 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-02-14 04:40:04.048667 | orchestrator | 28d7dfbc0355 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-02-14 04:40:04.048700 | orchestrator | 22182dc347c6 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-02-14 04:40:04.048733 | orchestrator | 240bb4cae9da registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-02-14 04:40:04.048752 | orchestrator | 700473cc6262 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-02-14 04:40:04.048770 | orchestrator | 5052f03950ad registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-02-14 04:40:04.048789 | orchestrator | 01bcd1461cad registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch 2026-02-14 04:40:04.048807 | orchestrator | bc5be756fd06 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived 2026-02-14 04:40:04.048824 | orchestrator | 8afb39f9ed90 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql 2026-02-14 04:40:04.048842 | orchestrator | 716a1d98b2e8 
registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy 2026-02-14 04:40:04.048862 | orchestrator | b91fb102ddc3 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-02-14 04:40:04.048881 | orchestrator | d607f64f6b39 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-02-14 04:40:04.048900 | orchestrator | f981d4a2d5f1 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-14 04:40:04.377323 | orchestrator | 2026-02-14 04:40:04.377417 | orchestrator | ## Images @ testbed-node-2 2026-02-14 04:40:04.377435 | orchestrator | 2026-02-14 04:40:04.377447 | orchestrator | + echo 2026-02-14 04:40:04.377460 | orchestrator | + echo '## Images @ testbed-node-2' 2026-02-14 04:40:04.377473 | orchestrator | + echo 2026-02-14 04:40:04.377486 | orchestrator | + osism container testbed-node-2 images 2026-02-14 04:40:06.916091 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-14 04:40:06.916188 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB 2026-02-14 04:40:06.916214 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB 2026-02-14 04:40:06.916235 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB 2026-02-14 04:40:06.916273 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB 2026-02-14 04:40:06.916286 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB 2026-02-14 04:40:06.916297 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-02-14 04:40:06.916308 | 
orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-02-14 04:40:06.916319 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB 2026-02-14 04:40:06.916347 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB 2026-02-14 04:40:06.916359 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB 2026-02-14 04:40:06.916374 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-14 04:40:06.916399 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB 2026-02-14 04:40:06.916412 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB 2026-02-14 04:40:06.916423 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB 2026-02-14 04:40:06.916434 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB 2026-02-14 04:40:06.916445 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB 2026-02-14 04:40:06.916455 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB 2026-02-14 04:40:06.916466 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-14 04:40:06.916477 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB 2026-02-14 04:40:06.916488 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-14 04:40:06.916498 | orchestrator | 
registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB 2026-02-14 04:40:06.916509 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB 2026-02-14 04:40:06.916520 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB 2026-02-14 04:40:06.916531 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB 2026-02-14 04:40:06.916541 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB 2026-02-14 04:40:06.916552 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB 2026-02-14 04:40:06.916563 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB 2026-02-14 04:40:06.916574 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB 2026-02-14 04:40:06.916584 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB 2026-02-14 04:40:06.916595 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB 2026-02-14 04:40:06.916606 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB 2026-02-14 04:40:06.916662 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB 2026-02-14 04:40:06.916685 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB 2026-02-14 04:40:06.916705 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB 2026-02-14 04:40:06.916725 | orchestrator | 
registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB 2026-02-14 04:40:06.916766 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB 2026-02-14 04:40:06.916781 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB 2026-02-14 04:40:06.916794 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB 2026-02-14 04:40:06.916815 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB 2026-02-14 04:40:06.916828 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB 2026-02-14 04:40:06.916841 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB 2026-02-14 04:40:06.916856 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB 2026-02-14 04:40:06.916868 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB 2026-02-14 04:40:06.916881 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB 2026-02-14 04:40:06.916893 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB 2026-02-14 04:40:06.916906 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB 2026-02-14 04:40:06.916919 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB 2026-02-14 04:40:06.916931 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB 2026-02-14 04:40:06.916944 | orchestrator | 
registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB 2026-02-14 04:40:06.916957 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB 2026-02-14 04:40:06.916970 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB 2026-02-14 04:40:06.916983 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB 2026-02-14 04:40:06.917003 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB 2026-02-14 04:40:06.917023 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB 2026-02-14 04:40:06.917043 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB 2026-02-14 04:40:06.917061 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB 2026-02-14 04:40:06.917077 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB 2026-02-14 04:40:06.917088 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB 2026-02-14 04:40:06.917098 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB 2026-02-14 04:40:06.917110 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB 2026-02-14 04:40:06.917130 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB 2026-02-14 04:40:06.917162 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB 2026-02-14 04:40:06.917183 | orchestrator 
| registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB 2026-02-14 04:40:06.917208 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB 2026-02-14 04:40:06.917219 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB 2026-02-14 04:40:06.917231 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB 2026-02-14 04:40:06.917241 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB 2026-02-14 04:40:06.917258 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB 2026-02-14 04:40:06.917269 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB 2026-02-14 04:40:07.237898 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-02-14 04:40:07.245882 | orchestrator | + set -e 2026-02-14 04:40:07.245921 | orchestrator | + source /opt/manager-vars.sh 2026-02-14 04:40:07.245929 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-14 04:40:07.245936 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-14 04:40:07.245942 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-14 04:40:07.245948 | orchestrator | ++ CEPH_VERSION=reef 2026-02-14 04:40:07.245955 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-14 04:40:07.245962 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-14 04:40:07.245968 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-14 04:40:07.245975 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-14 04:40:07.245981 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-14 04:40:07.245987 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-14 04:40:07.245993 | orchestrator | ++ export ARA=false 2026-02-14 04:40:07.245999 | orchestrator | ++ ARA=false 
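Earlier in the trace (just before the `## Containers @ testbed-node-2` header) the driver script runs `semver 9.5.0 5.0.0` and branches on `[[ 1 -eq -1 ]]`, which suggests the helper prints -1, 0 or 1 for an older, equal or newer first argument. A hedged sketch of such a comparator using GNU `sort -V` (an assumed reimplementation, not the actual testbed helper):

```shell
# Assumed contract of the `semver` helper seen in the trace:
# print -1 / 0 / 1 when $1 is older than / equal to / newer than $2.
semver() {
    if [ "$1" = "$2" ]; then
        echo 0
    elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then
        echo -1          # $1 sorts first under version sort, so it is older
    else
        echo 1
    fi
}

semver 9.5.0 5.0.0       # as in the trace: 9.5.0 is newer, prints 1
```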
2026-02-14 04:40:07.246006 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-14 04:40:07.246012 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-14 04:40:07.246049 | orchestrator | ++ export TEMPEST=false 2026-02-14 04:40:07.246056 | orchestrator | ++ TEMPEST=false 2026-02-14 04:40:07.246062 | orchestrator | ++ export IS_ZUUL=true 2026-02-14 04:40:07.246068 | orchestrator | ++ IS_ZUUL=true 2026-02-14 04:40:07.246074 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.122 2026-02-14 04:40:07.246081 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.122 2026-02-14 04:40:07.246087 | orchestrator | ++ export EXTERNAL_API=false 2026-02-14 04:40:07.246094 | orchestrator | ++ EXTERNAL_API=false 2026-02-14 04:40:07.246100 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-14 04:40:07.246106 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-14 04:40:07.246113 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-14 04:40:07.246119 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-14 04:40:07.246126 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-14 04:40:07.246132 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-14 04:40:07.246138 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-14 04:40:07.246145 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-02-14 04:40:07.253828 | orchestrator | + set -e 2026-02-14 04:40:07.253852 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-14 04:40:07.253861 | orchestrator | ++ export INTERACTIVE=false 2026-02-14 04:40:07.253870 | orchestrator | ++ INTERACTIVE=false 2026-02-14 04:40:07.253879 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-14 04:40:07.253888 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-14 04:40:07.253897 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-02-14 04:40:07.254598 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' 
/opt/configuration/environments/manager/configuration.yml 2026-02-14 04:40:07.258979 | orchestrator | 2026-02-14 04:40:07.259010 | orchestrator | # Ceph status 2026-02-14 04:40:07.259021 | orchestrator | 2026-02-14 04:40:07.259033 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-14 04:40:07.259044 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-14 04:40:07.259055 | orchestrator | + echo 2026-02-14 04:40:07.259066 | orchestrator | + echo '# Ceph status' 2026-02-14 04:40:07.259144 | orchestrator | + echo 2026-02-14 04:40:07.259157 | orchestrator | + ceph -s 2026-02-14 04:40:07.845158 | orchestrator | cluster: 2026-02-14 04:40:07.845246 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-02-14 04:40:07.845264 | orchestrator | health: HEALTH_OK 2026-02-14 04:40:07.845277 | orchestrator | 2026-02-14 04:40:07.845289 | orchestrator | services: 2026-02-14 04:40:07.845301 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 69m) 2026-02-14 04:40:07.845313 | orchestrator | mgr: testbed-node-1(active, since 57m), standbys: testbed-node-2, testbed-node-0 2026-02-14 04:40:07.845334 | orchestrator | mds: 1/1 daemons up, 2 standby 2026-02-14 04:40:07.845354 | orchestrator | osd: 6 osds: 6 up (since 65m), 6 in (since 66m) 2026-02-14 04:40:07.845374 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-02-14 04:40:07.845392 | orchestrator | 2026-02-14 04:40:07.845410 | orchestrator | data: 2026-02-14 04:40:07.845428 | orchestrator | volumes: 1/1 healthy 2026-02-14 04:40:07.845446 | orchestrator | pools: 14 pools, 401 pgs 2026-02-14 04:40:07.845466 | orchestrator | objects: 556 objects, 2.2 GiB 2026-02-14 04:40:07.845486 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2026-02-14 04:40:07.845504 | orchestrator | pgs: 401 active+clean 2026-02-14 04:40:07.845521 | orchestrator | 2026-02-14 04:40:07.881237 | orchestrator | 2026-02-14 04:40:07.881312 | orchestrator | # Ceph versions 2026-02-14 
04:40:07.881327 | orchestrator | 2026-02-14 04:40:07.881338 | orchestrator | + echo 2026-02-14 04:40:07.881350 | orchestrator | + echo '# Ceph versions' 2026-02-14 04:40:07.881362 | orchestrator | + echo 2026-02-14 04:40:07.881373 | orchestrator | + ceph versions 2026-02-14 04:40:08.450682 | orchestrator | { 2026-02-14 04:40:08.450792 | orchestrator | "mon": { 2026-02-14 04:40:08.450817 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-02-14 04:40:08.450830 | orchestrator | }, 2026-02-14 04:40:08.450842 | orchestrator | "mgr": { 2026-02-14 04:40:08.450853 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-02-14 04:40:08.450864 | orchestrator | }, 2026-02-14 04:40:08.450875 | orchestrator | "osd": { 2026-02-14 04:40:08.450886 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2026-02-14 04:40:08.450897 | orchestrator | }, 2026-02-14 04:40:08.450908 | orchestrator | "mds": { 2026-02-14 04:40:08.450918 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-02-14 04:40:08.450929 | orchestrator | }, 2026-02-14 04:40:08.450940 | orchestrator | "rgw": { 2026-02-14 04:40:08.450951 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-02-14 04:40:08.450962 | orchestrator | }, 2026-02-14 04:40:08.450973 | orchestrator | "overall": { 2026-02-14 04:40:08.450985 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2026-02-14 04:40:08.450996 | orchestrator | } 2026-02-14 04:40:08.451007 | orchestrator | } 2026-02-14 04:40:08.484088 | orchestrator | 2026-02-14 04:40:08.484163 | orchestrator | # Ceph OSD tree 2026-02-14 04:40:08.484176 | orchestrator | 2026-02-14 04:40:08.484188 | orchestrator | + echo 2026-02-14 04:40:08.484200 | orchestrator | + echo '# Ceph OSD tree' 2026-02-14 
04:40:08.484212 | orchestrator | + echo 2026-02-14 04:40:08.484223 | orchestrator | + ceph osd df tree 2026-02-14 04:40:08.994866 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-02-14 04:40:08.994969 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 398 MiB 113 GiB 5.89 1.00 - root default 2026-02-14 04:40:08.994984 | orchestrator | -5 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 127 MiB 38 GiB 5.88 1.00 - host testbed-node-3 2026-02-14 04:40:08.994995 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 62 MiB 19 GiB 6.37 1.08 199 up osd.0 2026-02-14 04:40:08.995004 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 66 MiB 19 GiB 5.38 0.91 193 up osd.5 2026-02-14 04:40:08.995014 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2026-02-14 04:40:08.995023 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.0 GiB 947 MiB 1 KiB 78 MiB 19 GiB 5.01 0.85 209 up osd.1 2026-02-14 04:40:08.995054 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 66 MiB 19 GiB 6.82 1.16 181 up osd.3 2026-02-14 04:40:08.995064 | orchestrator | -7 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 127 MiB 38 GiB 5.88 1.00 - host testbed-node-5 2026-02-14 04:40:08.995075 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 62 MiB 19 GiB 6.96 1.18 198 up osd.2 2026-02-14 04:40:08.995085 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 980 MiB 915 MiB 1 KiB 66 MiB 19 GiB 4.79 0.81 190 up osd.4 2026-02-14 04:40:08.995094 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 398 MiB 113 GiB 5.89 2026-02-14 04:40:08.995105 | orchestrator | MIN/MAX VAR: 0.81/1.18 STDDEV: 0.87 2026-02-14 04:40:09.034896 | orchestrator | 2026-02-14 04:40:09.034971 | orchestrator | # Ceph monitor status 2026-02-14 04:40:09.034985 | orchestrator | 2026-02-14 04:40:09.034997 | orchestrator | + echo 2026-02-14 04:40:09.035009 | orchestrator | + echo '# 
Ceph monitor status' 2026-02-14 04:40:09.035020 | orchestrator | + echo 2026-02-14 04:40:09.035031 | orchestrator | + ceph mon stat 2026-02-14 04:40:09.614105 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-02-14 04:40:09.647189 | orchestrator | 2026-02-14 04:40:09.647275 | orchestrator | # Ceph quorum status 2026-02-14 04:40:09.647291 | orchestrator | 2026-02-14 04:40:09.647304 | orchestrator | + echo 2026-02-14 04:40:09.647316 | orchestrator | + echo '# Ceph quorum status' 2026-02-14 04:40:09.647327 | orchestrator | + echo 2026-02-14 04:40:09.647473 | orchestrator | + ceph quorum_status 2026-02-14 04:40:09.647773 | orchestrator | + jq 2026-02-14 04:40:10.295138 | orchestrator | { 2026-02-14 04:40:10.295230 | orchestrator | "election_epoch": 8, 2026-02-14 04:40:10.295245 | orchestrator | "quorum": [ 2026-02-14 04:40:10.295257 | orchestrator | 0, 2026-02-14 04:40:10.295268 | orchestrator | 1, 2026-02-14 04:40:10.295279 | orchestrator | 2 2026-02-14 04:40:10.295290 | orchestrator | ], 2026-02-14 04:40:10.295300 | orchestrator | "quorum_names": [ 2026-02-14 04:40:10.295312 | orchestrator | "testbed-node-0", 2026-02-14 04:40:10.295323 | orchestrator | "testbed-node-1", 2026-02-14 04:40:10.295333 | orchestrator | "testbed-node-2" 2026-02-14 04:40:10.295344 | orchestrator | ], 2026-02-14 04:40:10.295356 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-02-14 04:40:10.295367 | orchestrator | "quorum_age": 4166, 2026-02-14 04:40:10.295378 | orchestrator | "features": { 2026-02-14 04:40:10.295389 | orchestrator | "quorum_con": "4540138322906710015", 2026-02-14 04:40:10.295400 | orchestrator | "quorum_mon": [ 2026-02-14 04:40:10.295411 | 
orchestrator | "kraken", 2026-02-14 04:40:10.295421 | orchestrator | "luminous", 2026-02-14 04:40:10.295432 | orchestrator | "mimic", 2026-02-14 04:40:10.295443 | orchestrator | "osdmap-prune", 2026-02-14 04:40:10.295454 | orchestrator | "nautilus", 2026-02-14 04:40:10.295464 | orchestrator | "octopus", 2026-02-14 04:40:10.295475 | orchestrator | "pacific", 2026-02-14 04:40:10.295485 | orchestrator | "elector-pinging", 2026-02-14 04:40:10.295496 | orchestrator | "quincy", 2026-02-14 04:40:10.295507 | orchestrator | "reef" 2026-02-14 04:40:10.295518 | orchestrator | ] 2026-02-14 04:40:10.295528 | orchestrator | }, 2026-02-14 04:40:10.295539 | orchestrator | "monmap": { 2026-02-14 04:40:10.295550 | orchestrator | "epoch": 1, 2026-02-14 04:40:10.295561 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-02-14 04:40:10.295572 | orchestrator | "modified": "2026-02-14T03:30:26.603149Z", 2026-02-14 04:40:10.295584 | orchestrator | "created": "2026-02-14T03:30:26.603149Z", 2026-02-14 04:40:10.295595 | orchestrator | "min_mon_release": 18, 2026-02-14 04:40:10.295606 | orchestrator | "min_mon_release_name": "reef", 2026-02-14 04:40:10.295616 | orchestrator | "election_strategy": 1, 2026-02-14 04:40:10.295850 | orchestrator | "disallowed_leaders: ": "", 2026-02-14 04:40:10.295862 | orchestrator | "stretch_mode": false, 2026-02-14 04:40:10.295873 | orchestrator | "tiebreaker_mon": "", 2026-02-14 04:40:10.295883 | orchestrator | "removed_ranks: ": "", 2026-02-14 04:40:10.295894 | orchestrator | "features": { 2026-02-14 04:40:10.295905 | orchestrator | "persistent": [ 2026-02-14 04:40:10.295915 | orchestrator | "kraken", 2026-02-14 04:40:10.295948 | orchestrator | "luminous", 2026-02-14 04:40:10.295959 | orchestrator | "mimic", 2026-02-14 04:40:10.295970 | orchestrator | "osdmap-prune", 2026-02-14 04:40:10.295981 | orchestrator | "nautilus", 2026-02-14 04:40:10.295991 | orchestrator | "octopus", 2026-02-14 04:40:10.296002 | orchestrator | "pacific", 2026-02-14 
04:40:10.296012 | orchestrator | "elector-pinging", 2026-02-14 04:40:10.296023 | orchestrator | "quincy", 2026-02-14 04:40:10.296034 | orchestrator | "reef" 2026-02-14 04:40:10.296045 | orchestrator | ], 2026-02-14 04:40:10.296056 | orchestrator | "optional": [] 2026-02-14 04:40:10.296066 | orchestrator | }, 2026-02-14 04:40:10.296077 | orchestrator | "mons": [ 2026-02-14 04:40:10.296088 | orchestrator | { 2026-02-14 04:40:10.296113 | orchestrator | "rank": 0, 2026-02-14 04:40:10.296125 | orchestrator | "name": "testbed-node-0", 2026-02-14 04:40:10.296136 | orchestrator | "public_addrs": { 2026-02-14 04:40:10.296147 | orchestrator | "addrvec": [ 2026-02-14 04:40:10.296158 | orchestrator | { 2026-02-14 04:40:10.296168 | orchestrator | "type": "v2", 2026-02-14 04:40:10.296180 | orchestrator | "addr": "192.168.16.10:3300", 2026-02-14 04:40:10.296191 | orchestrator | "nonce": 0 2026-02-14 04:40:10.296201 | orchestrator | }, 2026-02-14 04:40:10.296212 | orchestrator | { 2026-02-14 04:40:10.296223 | orchestrator | "type": "v1", 2026-02-14 04:40:10.296234 | orchestrator | "addr": "192.168.16.10:6789", 2026-02-14 04:40:10.296244 | orchestrator | "nonce": 0 2026-02-14 04:40:10.296255 | orchestrator | } 2026-02-14 04:40:10.296266 | orchestrator | ] 2026-02-14 04:40:10.296277 | orchestrator | }, 2026-02-14 04:40:10.296287 | orchestrator | "addr": "192.168.16.10:6789/0", 2026-02-14 04:40:10.296298 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2026-02-14 04:40:10.296309 | orchestrator | "priority": 0, 2026-02-14 04:40:10.296319 | orchestrator | "weight": 0, 2026-02-14 04:40:10.296330 | orchestrator | "crush_location": "{}" 2026-02-14 04:40:10.296341 | orchestrator | }, 2026-02-14 04:40:10.296351 | orchestrator | { 2026-02-14 04:40:10.296362 | orchestrator | "rank": 1, 2026-02-14 04:40:10.296373 | orchestrator | "name": "testbed-node-1", 2026-02-14 04:40:10.296383 | orchestrator | "public_addrs": { 2026-02-14 04:40:10.296394 | orchestrator | "addrvec": [ 2026-02-14 
04:40:10.296405 | orchestrator | { 2026-02-14 04:40:10.296415 | orchestrator | "type": "v2", 2026-02-14 04:40:10.296426 | orchestrator | "addr": "192.168.16.11:3300", 2026-02-14 04:40:10.296436 | orchestrator | "nonce": 0 2026-02-14 04:40:10.296449 | orchestrator | }, 2026-02-14 04:40:10.296461 | orchestrator | { 2026-02-14 04:40:10.296473 | orchestrator | "type": "v1", 2026-02-14 04:40:10.296486 | orchestrator | "addr": "192.168.16.11:6789", 2026-02-14 04:40:10.296498 | orchestrator | "nonce": 0 2026-02-14 04:40:10.296510 | orchestrator | } 2026-02-14 04:40:10.296522 | orchestrator | ] 2026-02-14 04:40:10.296534 | orchestrator | }, 2026-02-14 04:40:10.296546 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-02-14 04:40:10.296558 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-02-14 04:40:10.296570 | orchestrator | "priority": 0, 2026-02-14 04:40:10.296582 | orchestrator | "weight": 0, 2026-02-14 04:40:10.296595 | orchestrator | "crush_location": "{}" 2026-02-14 04:40:10.296607 | orchestrator | }, 2026-02-14 04:40:10.296655 | orchestrator | { 2026-02-14 04:40:10.296670 | orchestrator | "rank": 2, 2026-02-14 04:40:10.296682 | orchestrator | "name": "testbed-node-2", 2026-02-14 04:40:10.296695 | orchestrator | "public_addrs": { 2026-02-14 04:40:10.296708 | orchestrator | "addrvec": [ 2026-02-14 04:40:10.296720 | orchestrator | { 2026-02-14 04:40:10.296732 | orchestrator | "type": "v2", 2026-02-14 04:40:10.296745 | orchestrator | "addr": "192.168.16.12:3300", 2026-02-14 04:40:10.296758 | orchestrator | "nonce": 0 2026-02-14 04:40:10.296770 | orchestrator | }, 2026-02-14 04:40:10.296782 | orchestrator | { 2026-02-14 04:40:10.296793 | orchestrator | "type": "v1", 2026-02-14 04:40:10.296812 | orchestrator | "addr": "192.168.16.12:6789", 2026-02-14 04:40:10.296837 | orchestrator | "nonce": 0 2026-02-14 04:40:10.296863 | orchestrator | } 2026-02-14 04:40:10.296882 | orchestrator | ] 2026-02-14 04:40:10.296899 | orchestrator | }, 2026-02-14 04:40:10.296917 
| orchestrator | "addr": "192.168.16.12:6789/0", 2026-02-14 04:40:10.296935 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-02-14 04:40:10.296953 | orchestrator | "priority": 0, 2026-02-14 04:40:10.296985 | orchestrator | "weight": 0, 2026-02-14 04:40:10.297005 | orchestrator | "crush_location": "{}" 2026-02-14 04:40:10.297023 | orchestrator | } 2026-02-14 04:40:10.297043 | orchestrator | ] 2026-02-14 04:40:10.297062 | orchestrator | } 2026-02-14 04:40:10.297081 | orchestrator | } 2026-02-14 04:40:10.297114 | orchestrator | 2026-02-14 04:40:10.297135 | orchestrator | # Ceph free space status 2026-02-14 04:40:10.297154 | orchestrator | 2026-02-14 04:40:10.297174 | orchestrator | + echo 2026-02-14 04:40:10.297194 | orchestrator | + echo '# Ceph free space status' 2026-02-14 04:40:10.297213 | orchestrator | + echo 2026-02-14 04:40:10.297228 | orchestrator | + ceph df 2026-02-14 04:40:10.860011 | orchestrator | --- RAW STORAGE --- 2026-02-14 04:40:10.860102 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-02-14 04:40:10.860129 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.89 2026-02-14 04:40:10.860152 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.89 2026-02-14 04:40:10.860165 | orchestrator | 2026-02-14 04:40:10.860177 | orchestrator | --- POOLS --- 2026-02-14 04:40:10.860189 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-02-14 04:40:10.860202 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2026-02-14 04:40:10.860214 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-02-14 04:40:10.860225 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2026-02-14 04:40:10.860236 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-02-14 04:40:10.860248 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-02-14 04:40:10.860260 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-02-14 04:40:10.860272 | orchestrator | default.rgw.log 7 32 
3.6 KiB 209 408 KiB 0 35 GiB 2026-02-14 04:40:10.860283 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-02-14 04:40:10.860294 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB 2026-02-14 04:40:10.860305 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-02-14 04:40:10.860317 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-02-14 04:40:10.860328 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.93 35 GiB 2026-02-14 04:40:10.860339 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-02-14 04:40:10.860351 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-02-14 04:40:10.901798 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-14 04:40:10.962563 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-14 04:40:10.962717 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2026-02-14 04:40:10.962734 | orchestrator | + osism apply facts 2026-02-14 04:40:12.972258 | orchestrator | 2026-02-14 04:40:12 | INFO  | Task c240560b-9d0f-444d-82ca-c2777dff55ff (facts) was prepared for execution. 2026-02-14 04:40:12.972368 | orchestrator | 2026-02-14 04:40:12 | INFO  | It takes a moment until task c240560b-9d0f-444d-82ca-c2777dff55ff (facts) has been started and output is visible here. 
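The `osism validate ceph-mons` run that follows includes a quorum test ("Fail quorum test if not all monitors are in quorum"), which amounts to comparing the monitors listed in the monmap against the names currently in quorum from `ceph quorum_status`. A minimal sketch of that comparison, assuming `ceph quorum_status` JSON as input; the helper name `all_mons_in_quorum` is hypothetical and not part of OSISM:

```python
import json


def all_mons_in_quorum(quorum_status: dict) -> bool:
    """Return True if every monitor in the monmap is in the current quorum."""
    monmap_names = {mon["name"] for mon in quorum_status["monmap"]["mons"]}
    quorum_names = set(quorum_status["quorum_names"])
    return monmap_names == quorum_names


# Trimmed-down example mirroring the `ceph quorum_status` output above.
status = json.loads("""
{
  "quorum_names": ["testbed-node-0", "testbed-node-1", "testbed-node-2"],
  "monmap": {
    "mons": [
      {"name": "testbed-node-0"},
      {"name": "testbed-node-1"},
      {"name": "testbed-node-2"}
    ]
  }
}
""")
print(all_mons_in_quorum(status))  # True when all three mons are in quorum
```

On a live cluster the input would come from `ceph quorum_status` (piped through `jq` exactly as the check script above does); a mon that is down but still in the monmap makes the sets differ, which is what fails the validation.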
2026-02-14 04:40:28.117978 | orchestrator | 2026-02-14 04:40:28.118114 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-14 04:40:28.118128 | orchestrator | 2026-02-14 04:40:28.118138 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-14 04:40:28.118146 | orchestrator | Saturday 14 February 2026 04:40:17 +0000 (0:00:00.323) 0:00:00.323 ***** 2026-02-14 04:40:28.118152 | orchestrator | ok: [testbed-manager] 2026-02-14 04:40:28.118160 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:40:28.118165 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:40:28.118172 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:40:28.118184 | orchestrator | ok: [testbed-node-3] 2026-02-14 04:40:28.118191 | orchestrator | ok: [testbed-node-4] 2026-02-14 04:40:28.118199 | orchestrator | ok: [testbed-node-5] 2026-02-14 04:40:28.118206 | orchestrator | 2026-02-14 04:40:28.118213 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-14 04:40:28.118244 | orchestrator | Saturday 14 February 2026 04:40:19 +0000 (0:00:01.327) 0:00:01.651 ***** 2026-02-14 04:40:28.118252 | orchestrator | skipping: [testbed-manager] 2026-02-14 04:40:28.118260 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:40:28.118267 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:40:28.118273 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:40:28.118280 | orchestrator | skipping: [testbed-node-3] 2026-02-14 04:40:28.118286 | orchestrator | skipping: [testbed-node-4] 2026-02-14 04:40:28.118292 | orchestrator | skipping: [testbed-node-5] 2026-02-14 04:40:28.118298 | orchestrator | 2026-02-14 04:40:28.118305 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-14 04:40:28.118312 | orchestrator | 2026-02-14 04:40:28.118318 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-14 04:40:28.118325 | orchestrator | Saturday 14 February 2026 04:40:20 +0000 (0:00:01.404) 0:00:03.056 ***** 2026-02-14 04:40:28.118332 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:40:28.118339 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:40:28.118346 | orchestrator | ok: [testbed-manager] 2026-02-14 04:40:28.118352 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:40:28.118359 | orchestrator | ok: [testbed-node-3] 2026-02-14 04:40:28.118366 | orchestrator | ok: [testbed-node-5] 2026-02-14 04:40:28.118373 | orchestrator | ok: [testbed-node-4] 2026-02-14 04:40:28.118380 | orchestrator | 2026-02-14 04:40:28.118387 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-14 04:40:28.118393 | orchestrator | 2026-02-14 04:40:28.118399 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-14 04:40:28.118406 | orchestrator | Saturday 14 February 2026 04:40:27 +0000 (0:00:06.541) 0:00:09.597 ***** 2026-02-14 04:40:28.118413 | orchestrator | skipping: [testbed-manager] 2026-02-14 04:40:28.118420 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:40:28.118427 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:40:28.118434 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:40:28.118440 | orchestrator | skipping: [testbed-node-3] 2026-02-14 04:40:28.118447 | orchestrator | skipping: [testbed-node-4] 2026-02-14 04:40:28.118454 | orchestrator | skipping: [testbed-node-5] 2026-02-14 04:40:28.118460 | orchestrator | 2026-02-14 04:40:28.118467 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 04:40:28.118474 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-14 04:40:28.118482 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-14 04:40:28.118489 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-14 04:40:28.118509 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-14 04:40:28.118516 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-14 04:40:28.118523 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-14 04:40:28.118530 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-14 04:40:28.118536 | orchestrator | 2026-02-14 04:40:28.118544 | orchestrator | 2026-02-14 04:40:28.118552 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 04:40:28.118560 | orchestrator | Saturday 14 February 2026 04:40:27 +0000 (0:00:00.631) 0:00:10.229 ***** 2026-02-14 04:40:28.118569 | orchestrator | =============================================================================== 2026-02-14 04:40:28.118578 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.54s 2026-02-14 04:40:28.118592 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.40s 2026-02-14 04:40:28.118601 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.33s 2026-02-14 04:40:28.118631 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.63s 2026-02-14 04:40:28.454703 | orchestrator | + osism validate ceph-mons 2026-02-14 04:41:00.782062 | orchestrator | 2026-02-14 04:41:00.782160 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-02-14 04:41:00.782171 | orchestrator | 2026-02-14 04:41:00.782178 | orchestrator | TASK [Get timestamp for report file] 
******************************************* 2026-02-14 04:41:00.782186 | orchestrator | Saturday 14 February 2026 04:40:45 +0000 (0:00:00.435) 0:00:00.435 ***** 2026-02-14 04:41:00.782193 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-14 04:41:00.782200 | orchestrator | 2026-02-14 04:41:00.782206 | orchestrator | TASK [Create report output directory] ****************************************** 2026-02-14 04:41:00.782212 | orchestrator | Saturday 14 February 2026 04:40:46 +0000 (0:00:00.830) 0:00:01.265 ***** 2026-02-14 04:41:00.782219 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-14 04:41:00.782225 | orchestrator | 2026-02-14 04:41:00.782231 | orchestrator | TASK [Define report vars] ****************************************************** 2026-02-14 04:41:00.782237 | orchestrator | Saturday 14 February 2026 04:40:47 +0000 (0:00:01.016) 0:00:02.282 ***** 2026-02-14 04:41:00.782244 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:41:00.782251 | orchestrator | 2026-02-14 04:41:00.782258 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-02-14 04:41:00.782264 | orchestrator | Saturday 14 February 2026 04:40:47 +0000 (0:00:00.137) 0:00:02.419 ***** 2026-02-14 04:41:00.782270 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:41:00.782276 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:41:00.782282 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:41:00.782289 | orchestrator | 2026-02-14 04:41:00.782295 | orchestrator | TASK [Get container info] ****************************************************** 2026-02-14 04:41:00.782301 | orchestrator | Saturday 14 February 2026 04:40:47 +0000 (0:00:00.305) 0:00:02.724 ***** 2026-02-14 04:41:00.782307 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:41:00.782313 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:41:00.782320 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:41:00.782326 | 
orchestrator | 2026-02-14 04:41:00.782332 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-02-14 04:41:00.782338 | orchestrator | Saturday 14 February 2026 04:40:48 +0000 (0:00:01.063) 0:00:03.787 ***** 2026-02-14 04:41:00.782344 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:41:00.782351 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:41:00.782357 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:41:00.782363 | orchestrator | 2026-02-14 04:41:00.782370 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-02-14 04:41:00.782376 | orchestrator | Saturday 14 February 2026 04:40:48 +0000 (0:00:00.299) 0:00:04.087 ***** 2026-02-14 04:41:00.782382 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:41:00.782389 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:41:00.782395 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:41:00.782401 | orchestrator | 2026-02-14 04:41:00.782407 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-14 04:41:00.782413 | orchestrator | Saturday 14 February 2026 04:40:49 +0000 (0:00:00.474) 0:00:04.562 ***** 2026-02-14 04:41:00.782419 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:41:00.782426 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:41:00.782432 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:41:00.782438 | orchestrator | 2026-02-14 04:41:00.782444 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-02-14 04:41:00.782450 | orchestrator | Saturday 14 February 2026 04:40:49 +0000 (0:00:00.317) 0:00:04.880 ***** 2026-02-14 04:41:00.782457 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:41:00.782483 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:41:00.782490 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:41:00.782496 | orchestrator | 2026-02-14 
04:41:00.782502 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-02-14 04:41:00.782508 | orchestrator | Saturday 14 February 2026 04:40:49 +0000 (0:00:00.301) 0:00:05.182 ***** 2026-02-14 04:41:00.782514 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:41:00.782521 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:41:00.782527 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:41:00.782533 | orchestrator | 2026-02-14 04:41:00.782539 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-14 04:41:00.782546 | orchestrator | Saturday 14 February 2026 04:40:50 +0000 (0:00:00.542) 0:00:05.724 ***** 2026-02-14 04:41:00.782552 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:41:00.782558 | orchestrator | 2026-02-14 04:41:00.782564 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-14 04:41:00.782570 | orchestrator | Saturday 14 February 2026 04:40:50 +0000 (0:00:00.245) 0:00:05.970 ***** 2026-02-14 04:41:00.782578 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:41:00.782585 | orchestrator | 2026-02-14 04:41:00.782614 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-14 04:41:00.782622 | orchestrator | Saturday 14 February 2026 04:40:50 +0000 (0:00:00.268) 0:00:06.239 ***** 2026-02-14 04:41:00.782629 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:41:00.782636 | orchestrator | 2026-02-14 04:41:00.782644 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-14 04:41:00.782651 | orchestrator | Saturday 14 February 2026 04:40:51 +0000 (0:00:00.245) 0:00:06.484 ***** 2026-02-14 04:41:00.782658 | orchestrator | 2026-02-14 04:41:00.782666 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-14 04:41:00.782673 | orchestrator | 
Saturday 14 February 2026 04:40:51 +0000 (0:00:00.071) 0:00:06.556 ***** 2026-02-14 04:41:00.782680 | orchestrator | 2026-02-14 04:41:00.782687 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-14 04:41:00.782694 | orchestrator | Saturday 14 February 2026 04:40:51 +0000 (0:00:00.071) 0:00:06.627 ***** 2026-02-14 04:41:00.782701 | orchestrator | 2026-02-14 04:41:00.782709 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-14 04:41:00.782716 | orchestrator | Saturday 14 February 2026 04:40:51 +0000 (0:00:00.074) 0:00:06.702 ***** 2026-02-14 04:41:00.782723 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:41:00.782729 | orchestrator | 2026-02-14 04:41:00.782736 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-02-14 04:41:00.782757 | orchestrator | Saturday 14 February 2026 04:40:51 +0000 (0:00:00.252) 0:00:06.955 ***** 2026-02-14 04:41:00.782763 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:41:00.782770 | orchestrator | 2026-02-14 04:41:00.782789 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2026-02-14 04:41:00.782796 | orchestrator | Saturday 14 February 2026 04:40:51 +0000 (0:00:00.241) 0:00:07.196 ***** 2026-02-14 04:41:00.782802 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:41:00.782809 | orchestrator | 2026-02-14 04:41:00.782815 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-02-14 04:41:00.782821 | orchestrator | Saturday 14 February 2026 04:40:52 +0000 (0:00:00.124) 0:00:07.321 ***** 2026-02-14 04:41:00.782827 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:41:00.782837 | orchestrator | 2026-02-14 04:41:00.782843 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-02-14 04:41:00.782850 | orchestrator | 
Saturday 14 February 2026 04:40:53 +0000 (0:00:01.549) 0:00:08.870 *****
2026-02-14 04:41:00.782856 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:41:00.782862 | orchestrator |
2026-02-14 04:41:00.782868 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2026-02-14 04:41:00.782874 | orchestrator | Saturday 14 February 2026 04:40:54 +0000 (0:00:00.483) 0:00:09.353 *****
2026-02-14 04:41:00.782881 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:41:00.782895 | orchestrator |
2026-02-14 04:41:00.782902 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2026-02-14 04:41:00.782908 | orchestrator | Saturday 14 February 2026 04:40:54 +0000 (0:00:00.129) 0:00:09.483 *****
2026-02-14 04:41:00.782914 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:41:00.782920 | orchestrator |
2026-02-14 04:41:00.782926 | orchestrator | TASK [Set fsid test vars] ******************************************************
2026-02-14 04:41:00.782933 | orchestrator | Saturday 14 February 2026 04:40:54 +0000 (0:00:00.326) 0:00:09.809 *****
2026-02-14 04:41:00.782939 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:41:00.782945 | orchestrator |
2026-02-14 04:41:00.782960 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2026-02-14 04:41:00.782975 | orchestrator | Saturday 14 February 2026 04:40:54 +0000 (0:00:00.305) 0:00:10.115 *****
2026-02-14 04:41:00.782981 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:41:00.782988 | orchestrator |
2026-02-14 04:41:00.782994 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2026-02-14 04:41:00.783000 | orchestrator | Saturday 14 February 2026 04:40:54 +0000 (0:00:00.132) 0:00:10.247 *****
2026-02-14 04:41:00.783006 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:41:00.783012 | orchestrator |
2026-02-14 04:41:00.783019 | orchestrator | TASK [Prepare status test vars] ************************************************
2026-02-14 04:41:00.783025 | orchestrator | Saturday 14 February 2026 04:40:55 +0000 (0:00:00.135) 0:00:10.383 *****
2026-02-14 04:41:00.783031 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:41:00.783037 | orchestrator |
2026-02-14 04:41:00.783043 | orchestrator | TASK [Gather status data] ******************************************************
2026-02-14 04:41:00.783049 | orchestrator | Saturday 14 February 2026 04:40:55 +0000 (0:00:00.139) 0:00:10.522 *****
2026-02-14 04:41:00.783056 | orchestrator | changed: [testbed-node-0]
2026-02-14 04:41:00.783062 | orchestrator |
2026-02-14 04:41:00.783068 | orchestrator | TASK [Set health test data] ****************************************************
2026-02-14 04:41:00.783074 | orchestrator | Saturday 14 February 2026 04:40:56 +0000 (0:00:01.340) 0:00:11.863 *****
2026-02-14 04:41:00.783080 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:41:00.783087 | orchestrator |
2026-02-14 04:41:00.783093 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2026-02-14 04:41:00.783099 | orchestrator | Saturday 14 February 2026 04:40:56 +0000 (0:00:00.299) 0:00:12.163 *****
2026-02-14 04:41:00.783105 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:41:00.783111 | orchestrator |
2026-02-14 04:41:00.783117 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2026-02-14 04:41:00.783124 | orchestrator | Saturday 14 February 2026 04:40:57 +0000 (0:00:00.140) 0:00:12.303 *****
2026-02-14 04:41:00.783130 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:41:00.783136 | orchestrator |
2026-02-14 04:41:00.783142 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2026-02-14 04:41:00.783148 | orchestrator | Saturday 14 February 2026 04:40:57 +0000 (0:00:00.142) 0:00:12.446 *****
2026-02-14 04:41:00.783154 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:41:00.783161 | orchestrator |
2026-02-14 04:41:00.783167 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2026-02-14 04:41:00.783173 | orchestrator | Saturday 14 February 2026 04:40:57 +0000 (0:00:00.126) 0:00:12.572 *****
2026-02-14 04:41:00.783183 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:41:00.783189 | orchestrator |
2026-02-14 04:41:00.783196 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-02-14 04:41:00.783202 | orchestrator | Saturday 14 February 2026 04:40:57 +0000 (0:00:00.342) 0:00:12.915 *****
2026-02-14 04:41:00.783208 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-14 04:41:00.783214 | orchestrator |
2026-02-14 04:41:00.783220 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-02-14 04:41:00.783227 | orchestrator | Saturday 14 February 2026 04:40:57 +0000 (0:00:00.250) 0:00:13.165 *****
2026-02-14 04:41:00.783237 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:41:00.783243 | orchestrator |
2026-02-14 04:41:00.783250 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-14 04:41:00.783256 | orchestrator | Saturday 14 February 2026 04:40:58 +0000 (0:00:00.267) 0:00:13.432 *****
2026-02-14 04:41:00.783262 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-14 04:41:00.783268 | orchestrator |
2026-02-14 04:41:00.783275 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-14 04:41:00.783281 | orchestrator | Saturday 14 February 2026 04:40:59 +0000 (0:00:01.814) 0:00:15.246 *****
2026-02-14 04:41:00.783287 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-14 04:41:00.783293 | orchestrator |
2026-02-14 04:41:00.783299 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-14 04:41:00.783305 | orchestrator | Saturday 14 February 2026 04:41:00 +0000 (0:00:00.271) 0:00:15.518 *****
2026-02-14 04:41:00.783312 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-14 04:41:00.783318 | orchestrator |
2026-02-14 04:41:00.783329 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-14 04:41:03.470012 | orchestrator | Saturday 14 February 2026 04:41:00 +0000 (0:00:00.077) 0:00:15.806 *****
2026-02-14 04:41:03.470171 | orchestrator |
2026-02-14 04:41:03.470187 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-14 04:41:03.470201 | orchestrator | Saturday 14 February 2026 04:41:00 +0000 (0:00:00.070) 0:00:15.884 *****
2026-02-14 04:41:03.470212 | orchestrator |
2026-02-14 04:41:03.470224 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-14 04:41:03.470236 | orchestrator | Saturday 14 February 2026 04:41:00 +0000 (0:00:00.070) 0:00:15.955 *****
2026-02-14 04:41:03.470247 | orchestrator |
2026-02-14 04:41:03.470257 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-02-14 04:41:03.470268 | orchestrator | Saturday 14 February 2026 04:41:00 +0000 (0:00:00.074) 0:00:16.029 *****
2026-02-14 04:41:03.470280 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-14 04:41:03.470291 | orchestrator |
2026-02-14 04:41:03.470302 | orchestrator | TASK [Print report file information] *******************************************
2026-02-14 04:41:03.470312 | orchestrator | Saturday 14 February 2026 04:41:02 +0000 (0:00:01.546) 0:00:17.576 *****
2026-02-14 04:41:03.470323 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-02-14 04:41:03.470334 | orchestrator |  "msg": [
2026-02-14 04:41:03.470348 | orchestrator |  "Validator run completed.",
2026-02-14 04:41:03.470359 | orchestrator |  "You can find the report file here:",
2026-02-14 04:41:03.470370 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-02-14T04:40:45+00:00-report.json",
2026-02-14 04:41:03.470382 | orchestrator |  "on the following host:",
2026-02-14 04:41:03.470393 | orchestrator |  "testbed-manager"
2026-02-14 04:41:03.470404 | orchestrator |  ]
2026-02-14 04:41:03.470415 | orchestrator | }
2026-02-14 04:41:03.470427 | orchestrator |
2026-02-14 04:41:03.470438 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 04:41:03.470450 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-02-14 04:41:03.470462 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-14 04:41:03.470473 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-14 04:41:03.470484 | orchestrator |
2026-02-14 04:41:03.470495 | orchestrator |
2026-02-14 04:41:03.470506 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 04:41:03.470517 | orchestrator | Saturday 14 February 2026 04:41:03 +0000 (0:00:00.821) 0:00:18.398 *****
2026-02-14 04:41:03.470556 | orchestrator | ===============================================================================
2026-02-14 04:41:03.470571 | orchestrator | Aggregate test results step one ----------------------------------------- 1.81s
2026-02-14 04:41:03.470583 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.55s
2026-02-14 04:41:03.470625 | orchestrator | Write report file ------------------------------------------------------- 1.55s
2026-02-14 04:41:03.470638 | orchestrator | Gather status data ------------------------------------------------------ 1.34s
2026-02-14 04:41:03.470650 | orchestrator | Get container info ------------------------------------------------------ 1.06s
2026-02-14 04:41:03.470662 | orchestrator | Create report output directory ------------------------------------------ 1.02s
2026-02-14 04:41:03.470675 | orchestrator | Get timestamp for report file ------------------------------------------- 0.83s
2026-02-14 04:41:03.470687 | orchestrator | Print report file information ------------------------------------------- 0.82s
2026-02-14 04:41:03.470700 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.54s
2026-02-14 04:41:03.470712 | orchestrator | Set quorum test data ---------------------------------------------------- 0.48s
2026-02-14 04:41:03.470739 | orchestrator | Set test result to passed if container is existing ---------------------- 0.47s
2026-02-14 04:41:03.470752 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.34s
2026-02-14 04:41:03.470765 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.33s
2026-02-14 04:41:03.470777 | orchestrator | Prepare test data ------------------------------------------------------- 0.32s
2026-02-14 04:41:03.470790 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.31s
2026-02-14 04:41:03.470802 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s
2026-02-14 04:41:03.470815 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.30s
2026-02-14 04:41:03.470828 | orchestrator | Set test result to failed if container is missing ----------------------- 0.30s
2026-02-14 04:41:03.470840 | orchestrator | Set health test data ---------------------------------------------------- 0.30s
2026-02-14 04:41:03.470853 | orchestrator | Aggregate test results step three --------------------------------------- 0.29s
2026-02-14 04:41:03.769409 | orchestrator | + osism validate ceph-mgrs
2026-02-14 04:41:34.660953 | orchestrator |
2026-02-14 04:41:34.661054 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2026-02-14 04:41:34.661066 | orchestrator |
2026-02-14 04:41:34.661074 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-02-14 04:41:34.661089 | orchestrator | Saturday 14 February 2026 04:41:20 +0000 (0:00:00.440) 0:00:00.440 *****
2026-02-14 04:41:34.661097 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-14 04:41:34.661103 | orchestrator |
2026-02-14 04:41:34.661110 | orchestrator | TASK [Create report output directory] ******************************************
2026-02-14 04:41:34.661116 | orchestrator | Saturday 14 February 2026 04:41:21 +0000 (0:00:00.806) 0:00:01.247 *****
2026-02-14 04:41:34.661123 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-14 04:41:34.661129 | orchestrator |
2026-02-14 04:41:34.661136 | orchestrator | TASK [Define report vars] ******************************************************
2026-02-14 04:41:34.661142 | orchestrator | Saturday 14 February 2026 04:41:22 +0000 (0:00:00.949) 0:00:02.196 *****
2026-02-14 04:41:34.661148 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:41:34.661164 | orchestrator |
2026-02-14 04:41:34.661171 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-02-14 04:41:34.661177 | orchestrator | Saturday 14 February 2026 04:41:22 +0000 (0:00:00.146) 0:00:02.343 *****
2026-02-14 04:41:34.661183 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:41:34.661190 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:41:34.661196 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:41:34.661202 | orchestrator |
2026-02-14 04:41:34.661208 | orchestrator | TASK [Get container info] ******************************************************
2026-02-14 04:41:34.661215 | orchestrator | Saturday 14 February 2026 04:41:22 +0000 (0:00:00.334) 0:00:02.677 *****
2026-02-14 04:41:34.661240 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:41:34.661247 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:41:34.661253 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:41:34.661259 | orchestrator |
2026-02-14 04:41:34.661265 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-02-14 04:41:34.661272 | orchestrator | Saturday 14 February 2026 04:41:23 +0000 (0:00:00.977) 0:00:03.655 *****
2026-02-14 04:41:34.661278 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:41:34.661284 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:41:34.661290 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:41:34.661296 | orchestrator |
2026-02-14 04:41:34.661302 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-02-14 04:41:34.661309 | orchestrator | Saturday 14 February 2026 04:41:23 +0000 (0:00:00.310) 0:00:03.965 *****
2026-02-14 04:41:34.661316 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:41:34.661322 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:41:34.661328 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:41:34.661334 | orchestrator |
2026-02-14 04:41:34.661340 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-14 04:41:34.661347 | orchestrator | Saturday 14 February 2026 04:41:24 +0000 (0:00:00.537) 0:00:04.503 *****
2026-02-14 04:41:34.661353 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:41:34.661359 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:41:34.661365 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:41:34.661371 | orchestrator |
2026-02-14 04:41:34.661377 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2026-02-14 04:41:34.661383 | orchestrator | Saturday 14 February 2026 04:41:24 +0000 (0:00:00.315) 0:00:04.819 *****
2026-02-14 04:41:34.661390 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:41:34.661396 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:41:34.661402 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:41:34.661408 | orchestrator |
2026-02-14 04:41:34.661414 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2026-02-14 04:41:34.661420 | orchestrator | Saturday 14 February 2026 04:41:24 +0000 (0:00:00.286) 0:00:05.105 *****
2026-02-14 04:41:34.661427 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:41:34.661433 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:41:34.661439 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:41:34.661445 | orchestrator |
2026-02-14 04:41:34.661451 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-14 04:41:34.661457 | orchestrator | Saturday 14 February 2026 04:41:25 +0000 (0:00:00.508) 0:00:05.614 *****
2026-02-14 04:41:34.661463 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:41:34.661470 | orchestrator |
2026-02-14 04:41:34.661476 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-14 04:41:34.661482 | orchestrator | Saturday 14 February 2026 04:41:25 +0000 (0:00:00.255) 0:00:05.869 *****
2026-02-14 04:41:34.661489 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:41:34.661496 | orchestrator |
2026-02-14 04:41:34.661504 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-14 04:41:34.661511 | orchestrator | Saturday 14 February 2026 04:41:25 +0000 (0:00:00.264) 0:00:06.134 *****
2026-02-14 04:41:34.661518 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:41:34.661525 | orchestrator |
2026-02-14 04:41:34.661532 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-14 04:41:34.661539 | orchestrator | Saturday 14 February 2026 04:41:26 +0000 (0:00:00.254) 0:00:06.388 *****
2026-02-14 04:41:34.661546 | orchestrator |
2026-02-14 04:41:34.661554 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-14 04:41:34.661560 | orchestrator | Saturday 14 February 2026 04:41:26 +0000 (0:00:00.069) 0:00:06.458 *****
2026-02-14 04:41:34.661566 | orchestrator |
2026-02-14 04:41:34.661594 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-14 04:41:34.661601 | orchestrator | Saturday 14 February 2026 04:41:26 +0000 (0:00:00.074) 0:00:06.532 *****
2026-02-14 04:41:34.661619 | orchestrator |
2026-02-14 04:41:34.661625 | orchestrator | TASK [Print report file information] *******************************************
2026-02-14 04:41:34.661632 | orchestrator | Saturday 14 February 2026 04:41:26 +0000 (0:00:00.076) 0:00:06.608 *****
2026-02-14 04:41:34.661638 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:41:34.661644 | orchestrator |
2026-02-14 04:41:34.661651 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-02-14 04:41:34.661657 | orchestrator | Saturday 14 February 2026 04:41:26 +0000 (0:00:00.258) 0:00:06.866 *****
2026-02-14 04:41:34.661663 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:41:34.661670 | orchestrator |
2026-02-14 04:41:34.661688 | orchestrator | TASK [Define mgr module test vars] *********************************************
2026-02-14 04:41:34.661694 | orchestrator | Saturday 14 February 2026 04:41:26 +0000 (0:00:00.249) 0:00:07.116 *****
2026-02-14 04:41:34.661701 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:41:34.661707 | orchestrator |
2026-02-14 04:41:34.661713 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2026-02-14 04:41:34.661720 | orchestrator | Saturday 14 February 2026 04:41:27 +0000 (0:00:00.135) 0:00:07.252 *****
2026-02-14 04:41:34.661726 | orchestrator | changed: [testbed-node-0]
2026-02-14 04:41:34.661732 | orchestrator |
2026-02-14 04:41:34.661739 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2026-02-14 04:41:34.661745 | orchestrator | Saturday 14 February 2026 04:41:29 +0000 (0:00:01.990) 0:00:09.242 *****
2026-02-14 04:41:34.661751 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:41:34.661757 | orchestrator |
2026-02-14 04:41:34.661777 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2026-02-14 04:41:34.661784 | orchestrator | Saturday 14 February 2026 04:41:29 +0000 (0:00:00.470) 0:00:09.713 *****
2026-02-14 04:41:34.661790 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:41:34.661796 | orchestrator |
2026-02-14 04:41:34.661802 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2026-02-14 04:41:34.661809 | orchestrator | Saturday 14 February 2026 04:41:29 +0000 (0:00:00.158) 0:00:10.057 *****
2026-02-14 04:41:34.661815 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:41:34.661821 | orchestrator |
2026-02-14 04:41:34.661828 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2026-02-14 04:41:34.661834 | orchestrator | Saturday 14 February 2026 04:41:30 +0000 (0:00:00.148) 0:00:10.216 *****
2026-02-14 04:41:34.661840 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:41:34.661846 | orchestrator |
2026-02-14 04:41:34.661853 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-02-14 04:41:34.661859 | orchestrator | Saturday 14 February 2026 04:41:30 +0000 (0:00:00.148) 0:00:10.364 *****
2026-02-14 04:41:34.661865 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-14 04:41:34.661871 | orchestrator |
2026-02-14 04:41:34.661877 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-02-14 04:41:34.661884 | orchestrator | Saturday 14 February 2026 04:41:30 +0000 (0:00:00.265) 0:00:10.630 *****
2026-02-14 04:41:34.661890 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:41:34.661896 | orchestrator |
2026-02-14 04:41:34.661902 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-14 04:41:34.661909 | orchestrator | Saturday 14 February 2026 04:41:30 +0000 (0:00:00.240) 0:00:10.870 *****
2026-02-14 04:41:34.661915 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-14 04:41:34.661921 | orchestrator |
2026-02-14 04:41:34.661928 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-14 04:41:34.661934 | orchestrator | Saturday 14 February 2026 04:41:31 +0000 (0:00:01.259) 0:00:12.130 *****
2026-02-14 04:41:34.661940 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-14 04:41:34.661946 | orchestrator |
2026-02-14 04:41:34.661952 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-14 04:41:34.661959 | orchestrator | Saturday 14 February 2026 04:41:32 +0000 (0:00:00.267) 0:00:12.397 *****
2026-02-14 04:41:34.661971 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-14 04:41:34.661977 | orchestrator |
2026-02-14 04:41:34.661983 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-14 04:41:34.661989 | orchestrator | Saturday 14 February 2026 04:41:32 +0000 (0:00:00.259) 0:00:12.656 *****
2026-02-14 04:41:34.661996 | orchestrator |
2026-02-14 04:41:34.662002 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-14 04:41:34.662008 | orchestrator | Saturday 14 February 2026 04:41:32 +0000 (0:00:00.069) 0:00:12.726 *****
2026-02-14 04:41:34.662045 | orchestrator |
2026-02-14 04:41:34.662053 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-14 04:41:34.662060 | orchestrator | Saturday 14 February 2026 04:41:32 +0000 (0:00:00.070) 0:00:12.796 *****
2026-02-14 04:41:34.662066 | orchestrator |
2026-02-14 04:41:34.662073 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-02-14 04:41:34.662079 | orchestrator | Saturday 14 February 2026 04:41:32 +0000 (0:00:00.267) 0:00:13.063 *****
2026-02-14 04:41:34.662085 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-14 04:41:34.662091 | orchestrator |
2026-02-14 04:41:34.662098 | orchestrator | TASK [Print report file information] *******************************************
2026-02-14 04:41:34.662104 | orchestrator | Saturday 14 February 2026 04:41:34 +0000 (0:00:01.316) 0:00:14.380 *****
2026-02-14 04:41:34.662110 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-02-14 04:41:34.662116 | orchestrator |  "msg": [
2026-02-14 04:41:34.662123 | orchestrator |  "Validator run completed.",
2026-02-14 04:41:34.662133 | orchestrator |  "You can find the report file here:",
2026-02-14 04:41:34.662140 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-02-14T04:41:20+00:00-report.json",
2026-02-14 04:41:34.662147 | orchestrator |  "on the following host:",
2026-02-14 04:41:34.662153 | orchestrator |  "testbed-manager"
2026-02-14 04:41:34.662159 | orchestrator |  ]
2026-02-14 04:41:34.662166 | orchestrator | }
2026-02-14 04:41:34.662172 | orchestrator |
2026-02-14 04:41:34.662179 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 04:41:34.662284 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-14 04:41:34.662318 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-14 04:41:34.662360 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-14 04:41:34.986611 | orchestrator |
2026-02-14 04:41:34.986680 | orchestrator |
2026-02-14 04:41:34.986687 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 04:41:34.986693 | orchestrator | Saturday 14 February 2026 04:41:34 +0000 (0:00:00.396) 0:00:14.777 *****
2026-02-14 04:41:34.986698 | orchestrator | ===============================================================================
2026-02-14 04:41:34.986702 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.99s
2026-02-14 04:41:34.986706 | orchestrator | Write report file ------------------------------------------------------- 1.32s
2026-02-14 04:41:34.986710 | orchestrator | Aggregate test results step one ----------------------------------------- 1.26s
2026-02-14 04:41:34.986714 | orchestrator | Get container info ------------------------------------------------------ 0.98s
2026-02-14 04:41:34.986718 | orchestrator | Create report output directory ------------------------------------------ 0.95s
2026-02-14 04:41:34.986721 | orchestrator | Get timestamp for report file ------------------------------------------- 0.81s
2026-02-14 04:41:34.986725 | orchestrator | Set test result to passed if container is existing ---------------------- 0.54s
2026-02-14 04:41:34.986729 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.51s
2026-02-14 04:41:34.986759 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.47s
2026-02-14 04:41:34.986763 | orchestrator | Flush handlers ---------------------------------------------------------- 0.41s
2026-02-14 04:41:34.986767 | orchestrator | Print report file information ------------------------------------------- 0.40s
2026-02-14 04:41:34.986771 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.34s
2026-02-14 04:41:34.986774 | orchestrator | Prepare test data for container existance test -------------------------- 0.33s
2026-02-14 04:41:34.986778 | orchestrator | Prepare test data ------------------------------------------------------- 0.32s
2026-02-14 04:41:34.986782 | orchestrator | Set test result to failed if container is missing ----------------------- 0.31s
2026-02-14 04:41:34.986785 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.29s
2026-02-14 04:41:34.986789 | orchestrator | Aggregate test results step two ----------------------------------------- 0.27s
2026-02-14 04:41:34.986793 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.27s
2026-02-14 04:41:34.986797 | orchestrator | Aggregate test results step two ----------------------------------------- 0.26s
2026-02-14 04:41:34.986800 | orchestrator | Aggregate test results step three --------------------------------------- 0.26s
2026-02-14 04:41:35.303079 | orchestrator | + osism validate ceph-osds
2026-02-14 04:41:56.635636 | orchestrator |
2026-02-14 04:41:56.635770 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2026-02-14 04:41:56.635789 | orchestrator |
2026-02-14 04:41:56.635801 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-02-14 04:41:56.635813 | orchestrator | Saturday 14 February 2026 04:41:51 +0000 (0:00:00.423) 0:00:00.423 *****
2026-02-14 04:41:56.635836 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-14 04:41:56.635848 | orchestrator |
2026-02-14 04:41:56.635859 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-14 04:41:56.635870 | orchestrator | Saturday 14 February 2026 04:41:52 +0000 (0:00:00.816) 0:00:01.240 *****
2026-02-14 04:41:56.635882 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-14 04:41:56.635893 | orchestrator |
2026-02-14 04:41:56.635903 | orchestrator | TASK [Create report output directory] ******************************************
2026-02-14 04:41:56.635915 | orchestrator | Saturday 14 February 2026 04:41:53 +0000 (0:00:00.522) 0:00:01.763 *****
2026-02-14 04:41:56.635925 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-14 04:41:56.635936 | orchestrator |
2026-02-14 04:41:56.635947 | orchestrator | TASK [Define report vars] ******************************************************
2026-02-14 04:41:56.635958 | orchestrator | Saturday 14 February 2026 04:41:54 +0000 (0:00:00.740) 0:00:02.504 *****
2026-02-14 04:41:56.635969 | orchestrator | ok: [testbed-node-3]
2026-02-14 04:41:56.635983 | orchestrator |
2026-02-14 04:41:56.635994 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-02-14 04:41:56.636005 | orchestrator | Saturday 14 February 2026 04:41:54 +0000 (0:00:00.140) 0:00:02.644 *****
2026-02-14 04:41:56.636016 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:41:56.636028 | orchestrator |
2026-02-14 04:41:56.636039 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-02-14 04:41:56.636049 | orchestrator | Saturday 14 February 2026 04:41:54 +0000 (0:00:00.140) 0:00:02.785 *****
2026-02-14 04:41:56.636060 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:41:56.636072 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:41:56.636085 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:41:56.636097 | orchestrator |
2026-02-14 04:41:56.636126 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-02-14 04:41:56.636140 | orchestrator | Saturday 14 February 2026 04:41:54 +0000 (0:00:00.329) 0:00:03.114 *****
2026-02-14 04:41:56.636152 | orchestrator | ok: [testbed-node-3]
2026-02-14 04:41:56.636164 | orchestrator |
2026-02-14 04:41:56.636177 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-02-14 04:41:56.636213 | orchestrator | Saturday 14 February 2026 04:41:54 +0000 (0:00:00.144) 0:00:03.258 *****
2026-02-14 04:41:56.636227 | orchestrator | ok: [testbed-node-3]
2026-02-14 04:41:56.636239 | orchestrator | ok: [testbed-node-4]
2026-02-14 04:41:56.636251 | orchestrator | ok: [testbed-node-5]
2026-02-14 04:41:56.636263 | orchestrator |
2026-02-14 04:41:56.636276 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2026-02-14 04:41:56.636289 | orchestrator | Saturday 14 February 2026 04:41:55 +0000 (0:00:00.308) 0:00:03.567 *****
2026-02-14 04:41:56.636302 | orchestrator | ok: [testbed-node-3]
2026-02-14 04:41:56.636314 | orchestrator |
2026-02-14 04:41:56.636327 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-14 04:41:56.636340 | orchestrator | Saturday 14 February 2026 04:41:55 +0000 (0:00:00.845) 0:00:04.412 *****
2026-02-14 04:41:56.636352 | orchestrator | ok: [testbed-node-3]
2026-02-14 04:41:56.636364 | orchestrator | ok: [testbed-node-4]
2026-02-14 04:41:56.636377 | orchestrator | ok: [testbed-node-5]
2026-02-14 04:41:56.636389 | orchestrator |
2026-02-14 04:41:56.636402 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2026-02-14 04:41:56.636414 | orchestrator | Saturday 14 February 2026 04:41:56 +0000 (0:00:00.332) 0:00:04.745 *****
2026-02-14 04:41:56.636441 | orchestrator | skipping: [testbed-node-3] => (item={'id': '92e9198d6eddb34323f7ff80c97e5629bda97b5d9a8cdca0ff15d2bfa671b9b3', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})
2026-02-14 04:41:56.636459 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f3058d65434f2320d66a399580ac1c5b530c565745d16b2e96cd9b6bfd611241', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-02-14 04:41:56.636472 | orchestrator | skipping: [testbed-node-3] => (item={'id': '94681f7cccef3c563c6ee9c2045dc29ab7b12794afa52b0d0655ab5c3c5dd605', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})
2026-02-14 04:41:56.636484 | orchestrator | skipping: [testbed-node-3] => (item={'id': '739331487de2f84c3d93f18403e8f6745387a12a3bcee030e75f0fe8dbb9d3c5', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})
2026-02-14 04:41:56.636495 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e9f55118886f525a4668889a3d8a144e1a9e7a65ee919072a9ea957ec973b2f8', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})
2026-02-14 04:41:56.636544 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7718cd1d3a3c4921bcf75f902f8a320f11b6115ca567837be873c280fd6f483e', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-02-14 04:41:56.636582 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0b7f1bc668744380a7572ef161c8d97e483222497902dced588754af0a121785', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})
2026-02-14 04:41:56.636595 | orchestrator | skipping: [testbed-node-3] => (item={'id': '73e93536901bc4c466a489bf0e599427374a0582f80920c77c29a0879e5c0f86', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 48 minutes (healthy)'})
2026-02-14 04:41:56.636606 | orchestrator | skipping: [testbed-node-3] => (item={'id': '01769a2ff99072bdc7dc010e1218d6da7186f15bcb01819ec1e8df4ef64148c7', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-02-14 04:41:56.636628 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b85344e2b66b1b6ec6e8bc4c3aef83d30ab04a43d8a5363fba49e556c9ea2648', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-02-14 04:41:56.636640 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1bc8e1a6a2f3a89ac96b3fdfa53784f771bd2309d261d7c8a471f24933ef8f52', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-02-14 04:41:56.636652 | orchestrator | ok: [testbed-node-3] => (item={'id': '52b12e4f19c1286ad042ef4f6863c6f6049c1215e826c809a4c342e9fa54a312', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up About an hour'})
2026-02-14 04:41:56.636664 | orchestrator | ok: [testbed-node-3] => (item={'id': '95cd40a36da35ca0bf093d819665598c9045aaa55f7debf7c7da9da265e04ba6', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up About an hour'})
2026-02-14 04:41:56.636675 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a8cedd2be6e586723ef69ea1bb0b5f1fae5fce796d8a931300221b59533f2a42', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-02-14 04:41:56.636686 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5362400e06ba4d78b67e42e35900750ce7b7bae42ae87f4888782e0a60af1f3d', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-02-14 04:41:56.636697 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6e299026069242bf23e967bf1e0e402891cfb45a7351a7ec9ea3c2bacf9de902', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-02-14 04:41:56.636709 | orchestrator | skipping: [testbed-node-3] => (item={'id': '950d76b606e7e6f0760199f1fa613d9f01e5d860461fe0c4f374de4cb64bd021', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-14 04:41:56.636720 | orchestrator | skipping: [testbed-node-3] => (item={'id': '97cf96993fa0001a4c25b38dce55eda36c2df305fe94f513a82961c11de09e61', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-14 04:41:56.636731 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e2094174532ab92820028d6e6b2be52c65aedf6ee4166fd92f7dbed587b3d525', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-14 04:41:56.636743 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1c8b104c0b948b25bb492eed781f5382a50a01ff9afd85e9a1b61364031626cd', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 
9 minutes'})  2026-02-14 04:41:56.636762 | orchestrator | skipping: [testbed-node-4] => (item={'id': '61a51679a1a75972b6684eae8dee2d7cd0e6bd3b3db73ddd9f6b159f3eaff2be', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-14 04:41:56.892298 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd3ea641d472928c6db2860df7fb60412021047d002aeb7d54e77bbb6617f8d72', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-14 04:41:56.892462 | orchestrator | skipping: [testbed-node-4] => (item={'id': '44b4eff19aeab6dc7ef43d11cdae87e0d1c4385e9827419987dff24dabdb3962', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})  2026-02-14 04:41:56.892500 | orchestrator | skipping: [testbed-node-4] => (item={'id': '435a1a21f62cef9a0549ad624a5af967dba36ae49697b74ef68be78768383da6', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})  2026-02-14 04:41:56.892515 | orchestrator | skipping: [testbed-node-4] => (item={'id': '781b05b023d549fff0804537fedfece5ce4de9b962020a45b51b246c3cc71b7d', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-02-14 04:41:56.892532 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9abb9860f0f9b8746ef4575d4fcb2c0bba006230e8cf2e419e3cdcdb068253d9', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})  2026-02-14 04:41:56.892544 | orchestrator | skipping: [testbed-node-4] => 
(item={'id': '4055d29b99864895817c16856daad37b94f443c695df96017b7087f14d833f0f', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 48 minutes (healthy)'})  2026-02-14 04:41:56.892555 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7213ea11439106229291e64172a7cccd49bf8409eccd1934bd54ab665b6fb84b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-02-14 04:41:56.892609 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'dd7896bce8264388ec502fd2ff7473aeafe41ae9fe50abdf89f95b61b7112d26', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-02-14 04:41:56.892621 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5a3ca7a140034f2552e1ce437e178ae75376c060afe64c804f4d992301200867', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-02-14 04:41:56.892633 | orchestrator | ok: [testbed-node-4] => (item={'id': 'bb7da883a0832eaaef2303f9bdc9f69897a4dbbbdb04110ed83bd58d628f712b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-14 04:41:56.892645 | orchestrator | ok: [testbed-node-4] => (item={'id': '6b6b578687cb228d260d5395e4f58ab518c6e54983439cb0f0454238ec6bf54c', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-14 04:41:56.892656 | orchestrator | skipping: [testbed-node-4] => (item={'id': '79882dee05eebb8e161ba2019840ab72a128b1622bcd681df4a6966e22794d8c', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 
'state': 'running', 'status': 'Up About an hour'})  2026-02-14 04:41:56.892667 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2eba990c27950060823808930da1e7ef2e190e6a321c56c5e08b8284218cacd7', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-14 04:41:56.892678 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2665d8dbccabdb2e22d7cbf9ced5e1a043d382a678747db194cbd2507d28ce1e', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-14 04:41:56.892710 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd4bde8e930a8c41ace2eb81fded14e814cc2efe892aedf9490a9b12438f17ab9', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-14 04:41:56.892729 | orchestrator | skipping: [testbed-node-4] => (item={'id': '91161799c8697f90eb1ae1cae101d0c06b945d0d9c03b5d1f654d4d75424099e', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-14 04:41:56.892741 | orchestrator | skipping: [testbed-node-4] => (item={'id': '801dfb0d55d4cdd1785da4d1442acf639f84c6aeb40202138adc0ddb80216847', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-14 04:41:56.892752 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c3321bd35944f1eade948789959a63cd99b0c2422d52fe70f39ac18e260a7923', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})  2026-02-14 04:41:56.892763 | orchestrator | skipping: [testbed-node-5] => 
(item={'id': '4114387e67a91d4c65a8b27f47f325405ebc5fff0eb1ed0336ad13862b81db14', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-14 04:41:56.892779 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cdc71f9576161446acd061983b3d3a87cc65e62b712d307d7d70a4a82daf6245', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-14 04:41:56.892791 | orchestrator | skipping: [testbed-node-5] => (item={'id': '832c1a974dbe5bfa926f7d6c2ba7dabf90600f73d793281820c1fb8e458308f9', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})  2026-02-14 04:41:56.892802 | orchestrator | skipping: [testbed-node-5] => (item={'id': '627a99d66b271c4ae42c70adb8121c7f4cf54009e90fca28b711ee5dbfe955e9', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})  2026-02-14 04:41:56.892813 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fd30565c2bf8530168d16b990b147095ed565dfbbd686189b5812a44a2249f85', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-02-14 04:41:56.892824 | orchestrator | skipping: [testbed-node-5] => (item={'id': '212a246968db2e3972a156991515bc3e4dab86cfabcd38449d71c30b05f4a0de', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})  2026-02-14 04:41:56.892835 | orchestrator | skipping: [testbed-node-5] => (item={'id': '438ba8c7437754520e3e4f65bb333b0d195b71a24f6f4199bcb1e6e3fdfe9dd6', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 48 minutes (healthy)'})  2026-02-14 04:41:56.892846 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cdeeffffe31ebc6e84b3d9ea32dab33472323723f6a0f8ed9a3d290c882cb678', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-02-14 04:41:56.892858 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e6aa4f59765c11fef535debec7c0e87fba006f9be68f68af7252a6216d70d034', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-02-14 04:41:56.892869 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a4d8a616313d706936b03645a9448e46f5a9acfe0870e3aa9d591c6fb5ae4e27', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-02-14 04:41:56.892887 | orchestrator | ok: [testbed-node-5] => (item={'id': 'b12bed65ec3650358c3994e1d5392b151e882df8e5a68efa8eb071cf39149e98', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-14 04:41:56.892906 | orchestrator | ok: [testbed-node-5] => (item={'id': 'ed788ed73be8a57e4a64c749db88835389e2215f9a58fa834b1f5e0d6deef70d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-14 04:42:08.371282 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b2956377ee72cd605f8b88fafae33eafc68c3c6c8905f772af2bcb5e87d28f4a', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-02-14 04:42:08.371393 | 
orchestrator | skipping: [testbed-node-5] => (item={'id': '6baa109c48d9a3deaa84f9a3e2df21fe935a507affcc7944a790c3eadadefda1', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-02-14 04:42:08.371411 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'bf59a91446fd685e4cc660a9bac578d7b070fdf32bd587841b73fc3a0759280a', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-02-14 04:42:08.371425 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2b61dc2ee6c7600ae713bd34a733789b116a0e6e497d3c98ab7d74bdd64866de', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-14 04:42:08.371454 | orchestrator | skipping: [testbed-node-5] => (item={'id': '77d46c9a5c3e366404b818f27fc263eeff789b4c70b23883e10f1dfaa2a9bbe0', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-14 04:42:08.371467 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c70f201511759ce997dea357287ee7607d43d00e4d97b37f3a395a4c16c70682', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-14 04:42:08.371478 | orchestrator |
2026-02-14 04:42:08.371491 | orchestrator | TASK [Get count of ceph-osd containers on host] ********************************
2026-02-14 04:42:08.371504 | orchestrator | Saturday 14 February 2026 04:41:56 +0000 (0:00:00.586) 0:00:05.331 *****
2026-02-14 04:42:08.371515 | orchestrator | ok: [testbed-node-3]
2026-02-14 04:42:08.371527 | orchestrator | ok: [testbed-node-4]
2026-02-14 04:42:08.371538 | orchestrator | ok: [testbed-node-5]
2026-02-14 04:42:08.371549 | orchestrator |
2026-02-14 04:42:08.371646 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2026-02-14 04:42:08.371659 | orchestrator | Saturday 14 February 2026 04:41:57 +0000 (0:00:00.338) 0:00:05.669 *****
2026-02-14 04:42:08.371670 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:42:08.371682 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:42:08.371693 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:42:08.371704 | orchestrator |
2026-02-14 04:42:08.371715 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2026-02-14 04:42:08.371726 | orchestrator | Saturday 14 February 2026 04:41:57 +0000 (0:00:00.483) 0:00:06.153 *****
2026-02-14 04:42:08.371737 | orchestrator | ok: [testbed-node-3]
2026-02-14 04:42:08.371748 | orchestrator | ok: [testbed-node-4]
2026-02-14 04:42:08.371759 | orchestrator | ok: [testbed-node-5]
2026-02-14 04:42:08.371770 | orchestrator |
2026-02-14 04:42:08.371781 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-14 04:42:08.371792 | orchestrator | Saturday 14 February 2026 04:41:58 +0000 (0:00:00.342) 0:00:06.496 *****
2026-02-14 04:42:08.371803 | orchestrator | ok: [testbed-node-3]
2026-02-14 04:42:08.371814 | orchestrator | ok: [testbed-node-4]
2026-02-14 04:42:08.371827 | orchestrator | ok: [testbed-node-5]
2026-02-14 04:42:08.371864 | orchestrator |
2026-02-14 04:42:08.371877 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2026-02-14 04:42:08.371890 | orchestrator | Saturday 14 February 2026 04:41:58 +0000 (0:00:00.301) 0:00:06.797 *****
2026-02-14 04:42:08.371902 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2026-02-14 04:42:08.371917 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2026-02-14 04:42:08.371929 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:42:08.371942 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2026-02-14 04:42:08.371955 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2026-02-14 04:42:08.371967 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:42:08.371979 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2026-02-14 04:42:08.371992 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2026-02-14 04:42:08.372005 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:42:08.372017 | orchestrator |
2026-02-14 04:42:08.372030 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2026-02-14 04:42:08.372042 | orchestrator | Saturday 14 February 2026 04:41:58 +0000 (0:00:00.363) 0:00:07.161 *****
2026-02-14 04:42:08.372054 | orchestrator | ok: [testbed-node-3]
2026-02-14 04:42:08.372067 | orchestrator | ok: [testbed-node-4]
2026-02-14 04:42:08.372079 | orchestrator | ok: [testbed-node-5]
2026-02-14 04:42:08.372091 | orchestrator |
2026-02-14 04:42:08.372104 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-02-14 04:42:08.372116 | orchestrator | Saturday 14 February 2026 04:41:59 +0000 (0:00:00.523) 0:00:07.685 *****
2026-02-14 04:42:08.372128 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:42:08.372160 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:42:08.372174 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:42:08.372186 | orchestrator |
2026-02-14 04:42:08.372197 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-02-14 04:42:08.372208 | orchestrator | Saturday 14 February 2026 04:41:59 +0000 (0:00:00.318) 0:00:08.003 *****
2026-02-14 04:42:08.372219 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:42:08.372229 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:42:08.372240 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:42:08.372251 | orchestrator |
2026-02-14 04:42:08.372262 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2026-02-14 04:42:08.372272 | orchestrator | Saturday 14 February 2026 04:41:59 +0000 (0:00:00.302) 0:00:08.306 *****
2026-02-14 04:42:08.372283 | orchestrator | ok: [testbed-node-3]
2026-02-14 04:42:08.372294 | orchestrator | ok: [testbed-node-4]
2026-02-14 04:42:08.372304 | orchestrator | ok: [testbed-node-5]
2026-02-14 04:42:08.372315 | orchestrator |
2026-02-14 04:42:08.372326 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-14 04:42:08.372336 | orchestrator | Saturday 14 February 2026 04:42:00 +0000 (0:00:00.295) 0:00:08.602 *****
2026-02-14 04:42:08.372347 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:42:08.372358 | orchestrator |
2026-02-14 04:42:08.372368 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-14 04:42:08.372379 | orchestrator | Saturday 14 February 2026 04:42:00 +0000 (0:00:00.772) 0:00:09.375 *****
2026-02-14 04:42:08.372390 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:42:08.372400 | orchestrator |
2026-02-14 04:42:08.372411 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-14 04:42:08.372422 | orchestrator | Saturday 14 February 2026 04:42:01 +0000 (0:00:00.250) 0:00:09.625 *****
2026-02-14 04:42:08.372432 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:42:08.372443 | orchestrator |
2026-02-14 04:42:08.372454 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-14 04:42:08.372473 | orchestrator | Saturday 14 February 2026 04:42:01 +0000 (0:00:00.263) 0:00:09.889 *****
2026-02-14 04:42:08.372484 | orchestrator |
2026-02-14 04:42:08.372494 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-14 04:42:08.372505 | orchestrator | Saturday 14 February 2026 04:42:01 +0000 (0:00:00.069) 0:00:09.958 *****
2026-02-14 04:42:08.372516 | orchestrator |
2026-02-14 04:42:08.372527 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-14 04:42:08.372538 | orchestrator | Saturday 14 February 2026 04:42:01 +0000 (0:00:00.069) 0:00:10.028 *****
2026-02-14 04:42:08.372549 | orchestrator |
2026-02-14 04:42:08.372584 | orchestrator | TASK [Print report file information] *******************************************
2026-02-14 04:42:08.372596 | orchestrator | Saturday 14 February 2026 04:42:01 +0000 (0:00:00.072) 0:00:10.100 *****
2026-02-14 04:42:08.372606 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:42:08.372617 | orchestrator |
2026-02-14 04:42:08.372628 | orchestrator | TASK [Fail early due to containers not running] ********************************
2026-02-14 04:42:08.372639 | orchestrator | Saturday 14 February 2026 04:42:01 +0000 (0:00:00.247) 0:00:10.348 *****
2026-02-14 04:42:08.372650 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:42:08.372660 | orchestrator |
2026-02-14 04:42:08.372671 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-14 04:42:08.372682 | orchestrator | Saturday 14 February 2026 04:42:02 +0000 (0:00:00.248) 0:00:10.597 *****
2026-02-14 04:42:08.372693 | orchestrator | ok: [testbed-node-3]
2026-02-14 04:42:08.372704 | orchestrator | ok: [testbed-node-4]
2026-02-14 04:42:08.372715 | orchestrator | ok: [testbed-node-5]
2026-02-14 04:42:08.372725 | orchestrator |
2026-02-14 04:42:08.372736 | orchestrator | TASK [Set _mon_hostname fact] **************************************************
2026-02-14 04:42:08.372747 | orchestrator | Saturday 14 February 2026 04:42:02 +0000 (0:00:00.306) 0:00:10.903 *****
2026-02-14 04:42:08.372758 | orchestrator | ok: [testbed-node-3]
2026-02-14 04:42:08.372769 | orchestrator |
2026-02-14 04:42:08.372780 | orchestrator | TASK [Get ceph osd tree] *******************************************************
2026-02-14 04:42:08.372791 | orchestrator | Saturday 14 February 2026 04:42:03 +0000 (0:00:00.644) 0:00:11.547 *****
2026-02-14 04:42:08.372802 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-14 04:42:08.372812 | orchestrator |
2026-02-14 04:42:08.372823 | orchestrator | TASK [Parse osd tree from JSON] ************************************************
2026-02-14 04:42:08.372836 | orchestrator | Saturday 14 February 2026 04:42:04 +0000 (0:00:01.577) 0:00:13.124 *****
2026-02-14 04:42:08.372854 | orchestrator | ok: [testbed-node-3]
2026-02-14 04:42:08.372871 | orchestrator |
2026-02-14 04:42:08.372889 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2026-02-14 04:42:08.372907 | orchestrator | Saturday 14 February 2026 04:42:04 +0000 (0:00:00.135) 0:00:13.260 *****
2026-02-14 04:42:08.372926 | orchestrator | ok: [testbed-node-3]
2026-02-14 04:42:08.372944 | orchestrator |
2026-02-14 04:42:08.372962 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2026-02-14 04:42:08.372980 | orchestrator | Saturday 14 February 2026 04:42:05 +0000 (0:00:00.312) 0:00:13.573 *****
2026-02-14 04:42:08.372998 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:42:08.373016 | orchestrator |
2026-02-14 04:42:08.373034 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2026-02-14 04:42:08.373052 | orchestrator | Saturday 14 February 2026 04:42:05 +0000 (0:00:00.132) 0:00:13.705 *****
2026-02-14 04:42:08.373071 | orchestrator | ok: [testbed-node-3]
2026-02-14 04:42:08.373090 | orchestrator |
2026-02-14 04:42:08.373108 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-14 04:42:08.373126 | orchestrator | Saturday 14 February 2026 04:42:05 +0000 (0:00:00.149) 0:00:13.855 *****
2026-02-14 04:42:08.373146 | orchestrator | ok: [testbed-node-3]
2026-02-14 04:42:08.373164 | orchestrator | ok: [testbed-node-4]
2026-02-14 04:42:08.373182 | orchestrator | ok: [testbed-node-5]
2026-02-14 04:42:08.373217 | orchestrator |
2026-02-14 04:42:08.373237 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2026-02-14 04:42:08.373250 | orchestrator | Saturday 14 February 2026 04:42:05 +0000 (0:00:00.296) 0:00:14.151 *****
2026-02-14 04:42:08.373260 | orchestrator | changed: [testbed-node-3]
2026-02-14 04:42:08.373271 | orchestrator | changed: [testbed-node-4]
2026-02-14 04:42:08.373282 | orchestrator | changed: [testbed-node-5]
2026-02-14 04:42:18.609757 | orchestrator |
2026-02-14 04:42:18.609849 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2026-02-14 04:42:18.609860 | orchestrator | Saturday 14 February 2026 04:42:08 +0000 (0:00:02.652) 0:00:16.804 *****
2026-02-14 04:42:18.609868 | orchestrator | ok: [testbed-node-3]
2026-02-14 04:42:18.609878 | orchestrator | ok: [testbed-node-4]
2026-02-14 04:42:18.609883 | orchestrator | ok: [testbed-node-5]
2026-02-14 04:42:18.609887 | orchestrator |
2026-02-14 04:42:18.609892 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2026-02-14 04:42:18.609897 | orchestrator | Saturday 14 February 2026 04:42:08 +0000 (0:00:00.329) 0:00:17.133 *****
2026-02-14 04:42:18.609902 | orchestrator | ok: [testbed-node-3]
2026-02-14 04:42:18.609907 | orchestrator | ok: [testbed-node-4]
2026-02-14 04:42:18.609912 | orchestrator | ok: [testbed-node-5]
2026-02-14 04:42:18.609917 | orchestrator |
2026-02-14 04:42:18.609922 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2026-02-14 04:42:18.609926 | orchestrator | Saturday 14 February 2026 04:42:09 +0000 (0:00:00.496) 0:00:17.629 *****
2026-02-14 04:42:18.609931 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:42:18.609936 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:42:18.609941 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:42:18.609946 | orchestrator |
2026-02-14 04:42:18.609950 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2026-02-14 04:42:18.609955 | orchestrator | Saturday 14 February 2026 04:42:09 +0000 (0:00:00.521) 0:00:17.946 *****
2026-02-14 04:42:18.609960 | orchestrator | ok: [testbed-node-3]
2026-02-14 04:42:18.609964 | orchestrator | ok: [testbed-node-4]
2026-02-14 04:42:18.609969 | orchestrator | ok: [testbed-node-5]
2026-02-14 04:42:18.609973 | orchestrator |
2026-02-14 04:42:18.609978 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2026-02-14 04:42:18.609985 | orchestrator | Saturday 14 February 2026 04:42:10 +0000 (0:00:00.312) 0:00:18.468 *****
2026-02-14 04:42:18.609990 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:42:18.609994 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:42:18.609999 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:42:18.610004 | orchestrator |
2026-02-14 04:42:18.610008 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2026-02-14 04:42:18.610051 | orchestrator | Saturday 14 February 2026 04:42:10 +0000 (0:00:00.339) 0:00:18.781 *****
2026-02-14 04:42:18.610056 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:42:18.610061 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:42:18.610065 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:42:18.610070 | orchestrator |
2026-02-14 04:42:18.610075 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-14 04:42:18.610080 | orchestrator | Saturday 14 February 2026 04:42:10 +0000 (0:00:00.339) 0:00:19.120 *****
2026-02-14 04:42:18.610084 | orchestrator | ok: [testbed-node-3]
2026-02-14 04:42:18.610089 | orchestrator | ok: [testbed-node-4]
2026-02-14 04:42:18.610093 | orchestrator | ok: [testbed-node-5]
2026-02-14 04:42:18.610098 | orchestrator |
2026-02-14 04:42:18.610103 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2026-02-14 04:42:18.610107 | orchestrator | Saturday 14 February 2026 04:42:11 +0000 (0:00:00.500) 0:00:19.621 *****
2026-02-14 04:42:18.610112 | orchestrator | ok: [testbed-node-3]
2026-02-14 04:42:18.610117 | orchestrator | ok: [testbed-node-4]
2026-02-14 04:42:18.610121 | orchestrator | ok: [testbed-node-5]
2026-02-14 04:42:18.610126 | orchestrator |
2026-02-14 04:42:18.610130 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2026-02-14 04:42:18.610150 | orchestrator | Saturday 14 February 2026 04:42:11 +0000 (0:00:00.764) 0:00:20.385 *****
2026-02-14 04:42:18.610155 | orchestrator | ok: [testbed-node-3]
2026-02-14 04:42:18.610160 | orchestrator | ok: [testbed-node-4]
2026-02-14 04:42:18.610164 | orchestrator | ok: [testbed-node-5]
2026-02-14 04:42:18.610169 | orchestrator |
2026-02-14 04:42:18.610173 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2026-02-14 04:42:18.610178 | orchestrator | Saturday 14 February 2026 04:42:12 +0000 (0:00:00.334) 0:00:20.720 *****
2026-02-14 04:42:18.610182 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:42:18.610187 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:42:18.610192 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:42:18.610196 | orchestrator |
2026-02-14 04:42:18.610201 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2026-02-14 04:42:18.610205 | orchestrator | Saturday 14 February 2026 04:42:12 +0000 (0:00:00.308) 0:00:21.028 *****
2026-02-14 04:42:18.610210 | orchestrator | ok: [testbed-node-3]
2026-02-14 04:42:18.610215 | orchestrator | ok: [testbed-node-4]
2026-02-14 04:42:18.610219 | orchestrator | ok: [testbed-node-5]
2026-02-14 04:42:18.610224 | orchestrator |
2026-02-14 04:42:18.610228 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-02-14 04:42:18.610233 | orchestrator | Saturday 14 February 2026 04:42:13 +0000 (0:00:00.540) 0:00:21.568 *****
2026-02-14 04:42:18.610238 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-14 04:42:18.610242 | orchestrator |
2026-02-14 04:42:18.610247 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-02-14 04:42:18.610252 | orchestrator | Saturday 14 February 2026 04:42:13 +0000 (0:00:00.265) 0:00:21.834 *****
2026-02-14 04:42:18.610256 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:42:18.610261 | orchestrator |
2026-02-14 04:42:18.610265 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-14 04:42:18.610270 | orchestrator | Saturday 14 February 2026 04:42:13 +0000 (0:00:00.277) 0:00:22.111 *****
2026-02-14 04:42:18.610275 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-14 04:42:18.610279 | orchestrator |
2026-02-14 04:42:18.610284 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-14 04:42:18.610288 | orchestrator | Saturday 14 February 2026 04:42:15 +0000 (0:00:01.705) 0:00:23.816 *****
2026-02-14 04:42:18.610293 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-14 04:42:18.610298 | orchestrator |
2026-02-14 04:42:18.610303 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-14 04:42:18.610307 | orchestrator | Saturday 14 February 2026 04:42:15 +0000 (0:00:00.280) 0:00:24.097 *****
2026-02-14 04:42:18.610312 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-14 04:42:18.610317 | orchestrator |
2026-02-14 04:42:18.610333 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-14 04:42:18.610339 | orchestrator | Saturday 14 February 2026 04:42:15 +0000 (0:00:00.274) 0:00:24.371 *****
2026-02-14 04:42:18.610344 | orchestrator |
2026-02-14 04:42:18.610350 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-14 04:42:18.610355 | orchestrator | Saturday 14 February 2026 04:42:15 +0000 (0:00:00.071) 0:00:24.443 *****
2026-02-14 04:42:18.610360 | orchestrator |
2026-02-14 04:42:18.610365 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-14 04:42:18.610370 | orchestrator | Saturday 14 February 2026 04:42:16 +0000 (0:00:00.069) 0:00:24.512 *****
2026-02-14 04:42:18.610376 | orchestrator |
2026-02-14 04:42:18.610381 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-02-14 04:42:18.610386 | orchestrator | Saturday 14 February 2026 04:42:16 +0000 (0:00:00.075) 0:00:24.588 *****
2026-02-14 04:42:18.610391 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-14 04:42:18.610397 | orchestrator |
2026-02-14 04:42:18.610402 | orchestrator | TASK [Print report file information] *******************************************
2026-02-14 04:42:18.610412 | orchestrator | Saturday 14 February 2026 04:42:17 +0000 (0:00:01.541) 0:00:26.130 *****
2026-02-14 04:42:18.610417 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2026-02-14 04:42:18.610423 | orchestrator |  "msg": [
2026-02-14 04:42:18.610429 | orchestrator |  "Validator run completed.",
2026-02-14 04:42:18.610434 | orchestrator |  "You can find the report file here:",
2026-02-14 04:42:18.610440 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-02-14T04:41:52+00:00-report.json",
2026-02-14 04:42:18.610449 | orchestrator |  "on the following host:",
2026-02-14 04:42:18.610455 | orchestrator |  "testbed-manager"
2026-02-14 04:42:18.610460 | orchestrator |  ]
2026-02-14 04:42:18.610466 | orchestrator | }
2026-02-14 04:42:18.610471 | orchestrator |
2026-02-14 04:42:18.610477 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 04:42:18.610483 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-14 04:42:18.610489 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-14 04:42:18.610495 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-14 04:42:18.610500 | orchestrator |
2026-02-14 04:42:18.610506 | orchestrator |
2026-02-14 04:42:18.610511 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 04:42:18.610516 | orchestrator | Saturday 14 February 2026 04:42:18 +0000 (0:00:00.608) 0:00:26.739 *****
2026-02-14 04:42:18.610522 | orchestrator | ===============================================================================
2026-02-14 04:42:18.610527 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.65s
2026-02-14 04:42:18.610532 | orchestrator | Aggregate test results step one ----------------------------------------- 1.71s
2026-02-14 04:42:18.610537 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.58s
2026-02-14 04:42:18.610542 | orchestrator | Write report file ------------------------------------------------------- 1.54s
2026-02-14 04:42:18.610566 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.85s
2026-02-14 04:42:18.610572 | orchestrator | Get timestamp for report file ------------------------------------------- 0.82s
2026-02-14 04:42:18.610577 | orchestrator | Aggregate test results step one ----------------------------------------- 0.77s
2026-02-14 04:42:18.610582 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.76s
2026-02-14 04:42:18.610587 | orchestrator | Create report output directory ------------------------------------------ 0.74s
2026-02-14 04:42:18.610592 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.64s
2026-02-14 04:42:18.610598 | orchestrator | Print report file information ------------------------------------------- 0.61s
2026-02-14 04:42:18.610603 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.59s
2026-02-14 04:42:18.610608 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.54s
2026-02-14 04:42:18.610613 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.52s
2026-02-14 04:42:18.610618 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.52s
2026-02-14 04:42:18.610623 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.52s
2026-02-14 04:42:18.610628 | orchestrator | Prepare test data ------------------------------------------------------- 0.50s
2026-02-14 04:42:18.610633 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.50s
2026-02-14 04:42:18.610638 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.48s
2026-02-14 04:42:18.610644 | orchestrator | Get list of ceph-osd containers
that are not running -------------------- 0.36s 2026-02-14 04:42:18.917474 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-02-14 04:42:18.925132 | orchestrator | + set -e 2026-02-14 04:42:18.925192 | orchestrator | + source /opt/manager-vars.sh 2026-02-14 04:42:18.927034 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-14 04:42:18.927071 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-14 04:42:18.927082 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-14 04:42:18.927093 | orchestrator | ++ CEPH_VERSION=reef 2026-02-14 04:42:18.927104 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-14 04:42:18.927117 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-14 04:42:18.927128 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-14 04:42:18.927139 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-14 04:42:18.927149 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-14 04:42:18.927160 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-14 04:42:18.927171 | orchestrator | ++ export ARA=false 2026-02-14 04:42:18.927181 | orchestrator | ++ ARA=false 2026-02-14 04:42:18.927192 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-14 04:42:18.927203 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-14 04:42:18.927213 | orchestrator | ++ export TEMPEST=false 2026-02-14 04:42:18.927224 | orchestrator | ++ TEMPEST=false 2026-02-14 04:42:18.927234 | orchestrator | ++ export IS_ZUUL=true 2026-02-14 04:42:18.927244 | orchestrator | ++ IS_ZUUL=true 2026-02-14 04:42:18.927255 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.122 2026-02-14 04:42:18.927266 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.122 2026-02-14 04:42:18.927276 | orchestrator | ++ export EXTERNAL_API=false 2026-02-14 04:42:18.927287 | orchestrator | ++ EXTERNAL_API=false 2026-02-14 04:42:18.927297 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-14 04:42:18.927308 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-14 
04:42:18.927319 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-14 04:42:18.927329 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-14 04:42:18.927340 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-14 04:42:18.927350 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-14 04:42:18.927361 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-02-14 04:42:18.927371 | orchestrator | + source /etc/os-release 2026-02-14 04:42:18.927381 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-02-14 04:42:18.927392 | orchestrator | ++ NAME=Ubuntu 2026-02-14 04:42:18.927403 | orchestrator | ++ VERSION_ID=24.04 2026-02-14 04:42:18.927413 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-02-14 04:42:18.927424 | orchestrator | ++ VERSION_CODENAME=noble 2026-02-14 04:42:18.927434 | orchestrator | ++ ID=ubuntu 2026-02-14 04:42:18.927445 | orchestrator | ++ ID_LIKE=debian 2026-02-14 04:42:18.927455 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-02-14 04:42:18.927466 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-02-14 04:42:18.927476 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-02-14 04:42:18.927487 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-02-14 04:42:18.927499 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-02-14 04:42:18.927509 | orchestrator | ++ LOGO=ubuntu-logo 2026-02-14 04:42:18.927520 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-02-14 04:42:18.927531 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-02-14 04:42:18.927543 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-02-14 04:42:18.953906 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-02-14 04:42:41.172439 | orchestrator | 
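The trace above shows 200-infrastructure.sh branching on `/etc/redhat-release` vs `/etc/os-release` before installing its monitoring prerequisites with `dpkg -s` / `apt-get install`. A minimal sketch of that distro gate as a function; `packages_for_distro` is our name, and the RHEL package list is an assumption — only the Ubuntu list appears in this log.

```shell
# Sketch of the distro-gated package selection done by the check script.
# Only the ubuntu/debian list is taken from the job output above; the
# rhel/centos names are assumed equivalents, not from the testbed repo.
packages_for_distro() {
    case "$1" in
        ubuntu|debian)
            echo "libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client"
            ;;
        rhel|centos)
            # assumed EPEL equivalents (illustrative only)
            echo "perl-Monitoring-Plugin perl-libwww-perl perl-JSON nagios-plugins-all mariadb"
            ;;
        *)
            echo ""
            ;;
    esac
}

# prints the Ubuntu package list used by the apt-get call above
packages_for_distro ubuntu
```

On a live host the result would feed `dpkg -s` first, as the script does, so `apt-get install` only runs when something is actually missing.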
2026-02-14 04:42:41.172636 | orchestrator | # Status of Elasticsearch 2026-02-14 04:42:41.172664 | orchestrator | 2026-02-14 04:42:41.172682 | orchestrator | + pushd /opt/configuration/contrib 2026-02-14 04:42:41.172701 | orchestrator | + echo 2026-02-14 04:42:41.172717 | orchestrator | + echo '# Status of Elasticsearch' 2026-02-14 04:42:41.172731 | orchestrator | + echo 2026-02-14 04:42:41.172745 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-02-14 04:42:41.383388 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-02-14 04:42:41.383456 | orchestrator | 2026-02-14 04:42:41.383463 | orchestrator | # Status of MariaDB 2026-02-14 04:42:41.383469 | orchestrator | + echo 2026-02-14 04:42:41.383474 | orchestrator | + echo '# Status of MariaDB' 2026-02-14 04:42:41.383499 | orchestrator | 2026-02-14 04:42:41.383504 | orchestrator | + echo 2026-02-14 04:42:41.384047 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-14 04:42:41.436766 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-14 04:42:41.436851 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-02-14 04:42:41.436865 | orchestrator | + MARIADB_USER=root_shard_0 2026-02-14 04:42:41.436877 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2026-02-14 04:42:41.495077 | orchestrator | Reading package lists... 2026-02-14 04:42:41.837524 | orchestrator | Building dependency tree... 2026-02-14 04:42:41.837899 | orchestrator | Reading state information... 2026-02-14 04:42:42.224058 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 
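The script gates behaviour on a version comparison: `semver 9.5.0 10.0.0-0` returns -1 above, and `[[ -1 -ge 0 ]]` then selects the pre-10 branch (e.g. `MARIADB_USER=root_shard_0`). A portable stand-in for that helper built on GNU `sort -V`; `vercmp` is our name for it, not the testbed's.

```shell
# Print -1, 0 or 1 for version pairs, like the semver helper in the
# trace above. Relies on GNU sort's version-sort (-V).
vercmp() {
    if [ "$1" = "$2" ]; then
        echo 0
    elif [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then
        echo -1
    else
        echo 1
    fi
}

# prints -1: 9.5.0 sorts before 10.0.0-0, so the pre-10 branch is taken
vercmp 9.5.0 10.0.0-0
```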
2026-02-14 04:42:42.224145 | orchestrator | bc set to manually installed. 2026-02-14 04:42:42.224156 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2026-02-14 04:42:42.901085 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2026-02-14 04:42:42.901694 | orchestrator | 2026-02-14 04:42:42.901747 | orchestrator | # Status of Prometheus 2026-02-14 04:42:42.901769 | orchestrator | 2026-02-14 04:42:42.901787 | orchestrator | + echo 2026-02-14 04:42:42.901807 | orchestrator | + echo '# Status of Prometheus' 2026-02-14 04:42:42.901825 | orchestrator | + echo 2026-02-14 04:42:42.901844 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-02-14 04:42:42.966213 | orchestrator | Unauthorized 2026-02-14 04:42:42.972526 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-02-14 04:42:43.040306 | orchestrator | Unauthorized 2026-02-14 04:42:43.043373 | orchestrator | 2026-02-14 04:42:43.043438 | orchestrator | # Status of RabbitMQ 2026-02-14 04:42:43.043461 | orchestrator | 2026-02-14 04:42:43.043481 | orchestrator | + echo 2026-02-14 04:42:43.043502 | orchestrator | + echo '# Status of RabbitMQ' 2026-02-14 04:42:43.043522 | orchestrator | + echo 2026-02-14 04:42:43.043998 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-14 04:42:43.089493 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-14 04:42:43.089612 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-02-14 04:42:43.089628 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2026-02-14 04:42:43.564439 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2026-02-14 04:42:43.573305 | orchestrator | 2026-02-14 04:42:43.573351 | orchestrator | # Status of Redis 2026-02-14 04:42:43.573362 | orchestrator | 2026-02-14 04:42:43.573371 | orchestrator | + echo 2026-02-14 04:42:43.573379 | orchestrator | + 
echo '# Status of Redis' 2026-02-14 04:42:43.573388 | orchestrator | + echo 2026-02-14 04:42:43.573398 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-02-14 04:42:43.581295 | orchestrator | TCP OK - 0.003 second response time on 192.168.16.10 port 6379|time=0.002691s;;;0.000000;10.000000 2026-02-14 04:42:43.581320 | orchestrator | 2026-02-14 04:42:43.581329 | orchestrator | # Create backup of MariaDB database 2026-02-14 04:42:43.581339 | orchestrator | 2026-02-14 04:42:43.581348 | orchestrator | + popd 2026-02-14 04:42:43.581357 | orchestrator | + echo 2026-02-14 04:42:43.581366 | orchestrator | + echo '# Create backup of MariaDB database' 2026-02-14 04:42:43.581375 | orchestrator | + echo 2026-02-14 04:42:43.581384 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-02-14 04:42:45.640961 | orchestrator | 2026-02-14 04:42:45 | INFO  | Task 0f9bd1ac-d2ea-498f-a134-4e6ececed0a0 (mariadb_backup) was prepared for execution. 2026-02-14 04:42:45.641061 | orchestrator | 2026-02-14 04:42:45 | INFO  | It takes a moment until task 0f9bd1ac-d2ea-498f-a134-4e6ececed0a0 (mariadb_backup) has been started and output is visible here. 
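The Redis probe above drives `AUTH`/`PING`/`INFO replication` through `check_tcp` and asserts on `PONG` and `role:master`. The same role assertion can be made against a captured `INFO replication` payload, tolerant of the `\r` line endings Redis sends; `parse_redis_role` is our helper name, not part of the testbed scripts.

```shell
# Extract the role field from `INFO replication` output, stripping the
# carriage returns present in the raw Redis protocol reply.
parse_redis_role() {
    printf '%s\n' "$1" | tr -d '\r' | awk -F: '/^role:/ {print $2}'
}

# sample payload standing in for a live redis-cli/check_tcp session
reply=$(printf '# Replication\r\nrole:master\r\nconnected_slaves:1\r\n')
parse_redis_role "$reply"   # prints master
```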
2026-02-14 04:45:30.324829 | orchestrator | 2026-02-14 04:45:30.324940 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-14 04:45:30.324955 | orchestrator | 2026-02-14 04:45:30.324966 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-14 04:45:30.324977 | orchestrator | Saturday 14 February 2026 04:42:49 +0000 (0:00:00.187) 0:00:00.187 ***** 2026-02-14 04:45:30.324987 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:45:30.324998 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:45:30.325009 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:45:30.325019 | orchestrator | 2026-02-14 04:45:30.325029 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-14 04:45:30.325065 | orchestrator | Saturday 14 February 2026 04:42:50 +0000 (0:00:00.361) 0:00:00.548 ***** 2026-02-14 04:45:30.325076 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-02-14 04:45:30.325087 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-02-14 04:45:30.325097 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-02-14 04:45:30.325107 | orchestrator | 2026-02-14 04:45:30.325117 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-02-14 04:45:30.325127 | orchestrator | 2026-02-14 04:45:30.325137 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-02-14 04:45:30.325147 | orchestrator | Saturday 14 February 2026 04:42:50 +0000 (0:00:00.621) 0:00:01.170 ***** 2026-02-14 04:45:30.325157 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-14 04:45:30.325180 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-14 04:45:30.325190 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-14 04:45:30.325200 | orchestrator | 
2026-02-14 04:45:30.325209 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-14 04:45:30.325219 | orchestrator | Saturday 14 February 2026 04:42:51 +0000 (0:00:00.429) 0:00:01.599 ***** 2026-02-14 04:45:30.325230 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 04:45:30.325241 | orchestrator | 2026-02-14 04:45:30.325251 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-02-14 04:45:30.325273 | orchestrator | Saturday 14 February 2026 04:42:51 +0000 (0:00:00.557) 0:00:02.157 ***** 2026-02-14 04:45:30.325283 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:45:30.325293 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:45:30.325302 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:45:30.325312 | orchestrator | 2026-02-14 04:45:30.325321 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-02-14 04:45:30.325331 | orchestrator | Saturday 14 February 2026 04:42:55 +0000 (0:00:03.276) 0:00:05.434 ***** 2026-02-14 04:45:30.325341 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:45:30.325351 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:45:30.325361 | orchestrator | 2026-02-14 04:45:30.325373 | orchestrator | STILL ALIVE [task 'mariadb : Taking full database backup via Mariabackup' is running] *** 2026-02-14 04:45:30.325384 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-02-14 04:45:30.325395 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-02-14 04:45:30.325406 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-14 04:45:30.325418 | orchestrator | mariadb_bootstrap_restart 2026-02-14 04:45:30.325429 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:45:30.325440 | orchestrator | 
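Every play in this job ends with a PLAY RECAP line per host. When post-processing these logs (for example to fail a wrapper script on any non-zero counter), the `failed=` field is the interesting one; `recap_failed` below is our sketch, not an osism tool.

```shell
# Pull the failed= count out of one Ansible PLAY RECAP host line.
recap_failed() {
    printf '%s\n' "$1" | sed -n 's/.*failed=\([0-9][0-9]*\).*/\1/p'
}

# prints 0 for a recap line shaped like the ones in this log
recap_failed 'testbed-node-0 : ok=6 changed=1 unreachable=0 failed=0 skipped=2'
```

The same pattern works for `unreachable=`; a non-empty, non-zero result from either is a reliable signal that the play needs attention.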
2026-02-14 04:45:30.325451 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-14 04:45:30.325462 | orchestrator | skipping: no hosts matched 2026-02-14 04:45:30.325473 | orchestrator | 2026-02-14 04:45:30.325501 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-14 04:45:30.325513 | orchestrator | skipping: no hosts matched 2026-02-14 04:45:30.325524 | orchestrator | 2026-02-14 04:45:30.325535 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-02-14 04:45:30.325546 | orchestrator | skipping: no hosts matched 2026-02-14 04:45:30.325557 | orchestrator | 2026-02-14 04:45:30.325567 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-02-14 04:45:30.325579 | orchestrator | 2026-02-14 04:45:30.325590 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-02-14 04:45:30.325601 | orchestrator | Saturday 14 February 2026 04:45:29 +0000 (0:02:34.112) 0:02:39.546 ***** 2026-02-14 04:45:30.325611 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:45:30.325622 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:45:30.325641 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:45:30.325652 | orchestrator | 2026-02-14 04:45:30.325663 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-02-14 04:45:30.325674 | orchestrator | Saturday 14 February 2026 04:45:29 +0000 (0:00:00.305) 0:02:39.852 ***** 2026-02-14 04:45:30.325684 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:45:30.325695 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:45:30.325707 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:45:30.325718 | orchestrator | 2026-02-14 04:45:30.325729 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-14 04:45:30.325739 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-14 04:45:30.325750 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-14 04:45:30.325760 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-14 04:45:30.325770 | orchestrator | 2026-02-14 04:45:30.325780 | orchestrator | 2026-02-14 04:45:30.325789 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 04:45:30.325799 | orchestrator | Saturday 14 February 2026 04:45:29 +0000 (0:00:00.471) 0:02:40.323 ***** 2026-02-14 04:45:30.325808 | orchestrator | =============================================================================== 2026-02-14 04:45:30.325842 | orchestrator | mariadb : Taking full database backup via Mariabackup ----------------- 154.11s 2026-02-14 04:45:30.325860 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.28s 2026-02-14 04:45:30.325875 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.62s 2026-02-14 04:45:30.325901 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.56s 2026-02-14 04:45:30.325921 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.47s 2026-02-14 04:45:30.325936 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.43s 2026-02-14 04:45:30.325952 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s 2026-02-14 04:45:30.325968 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.31s 2026-02-14 04:45:30.662210 | orchestrator | + sh -c 
/opt/configuration/scripts/check/300-openstack.sh 2026-02-14 04:45:30.673532 | orchestrator | + set -e 2026-02-14 04:45:30.673621 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-14 04:45:30.674352 | orchestrator | ++ export INTERACTIVE=false 2026-02-14 04:45:30.674377 | orchestrator | ++ INTERACTIVE=false 2026-02-14 04:45:30.674450 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-14 04:45:30.674465 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-14 04:45:30.674510 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-02-14 04:45:30.676826 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-02-14 04:45:30.685759 | orchestrator | 2026-02-14 04:45:30.685806 | orchestrator | # OpenStack endpoints 2026-02-14 04:45:30.685813 | orchestrator | 2026-02-14 04:45:30.685818 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-14 04:45:30.685823 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-14 04:45:30.685829 | orchestrator | + export OS_CLOUD=admin 2026-02-14 04:45:30.685833 | orchestrator | + OS_CLOUD=admin 2026-02-14 04:45:30.685839 | orchestrator | + echo 2026-02-14 04:45:30.685847 | orchestrator | + echo '# OpenStack endpoints' 2026-02-14 04:45:30.685854 | orchestrator | + echo 2026-02-14 04:45:30.685862 | orchestrator | + openstack endpoint list 2026-02-14 04:45:33.823050 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-02-14 04:45:33.823151 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-02-14 04:45:33.823166 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-02-14 04:45:33.823201 | orchestrator | | 
0396cc603d924b01896ef7dbb2e8331e | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-02-14 04:45:33.823230 | orchestrator | | 0509abb9928b413496f6dbba3627b0fb | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-02-14 04:45:33.823242 | orchestrator | | 089a19c0ad0c4336ae028996024eec2e | RegionOne | aodh | alarming | True | internal | https://api-int.testbed.osism.xyz:8042 | 2026-02-14 04:45:33.823253 | orchestrator | | 160afc5205d341fda762f7563c58bdc2 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-02-14 04:45:33.823264 | orchestrator | | 2073f9b19a584bffa6e5fbc21531614e | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-02-14 04:45:33.823275 | orchestrator | | 2edacfbf6dfa425e8c7d2f893668682d | RegionOne | aodh | alarming | True | public | https://api.testbed.osism.xyz:8042 | 2026-02-14 04:45:33.823286 | orchestrator | | 37c8ba14875e49ce80bc1925561e2d32 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-02-14 04:45:33.823296 | orchestrator | | 3ae89a0ffe744c3aaefdaa549474d982 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-02-14 04:45:33.823307 | orchestrator | | 66bc74d505594f3a97234c960a70a395 | RegionOne | skyline | panel | True | public | https://api.testbed.osism.xyz:9998 | 2026-02-14 04:45:33.823318 | orchestrator | | 6879650a89104cd6b2139d7302cc5006 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-02-14 04:45:33.823328 | orchestrator | | 81f25d1d6561497497cc8495a576f462 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-02-14 04:45:33.823339 | orchestrator | | 84e9a65a56ed4359862a7a0ceeec1759 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-02-14 
04:45:33.823350 | orchestrator | | 8866a54a1bbf496891fdd2329b0c536d | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-02-14 04:45:33.823361 | orchestrator | | 8db6089a9f5542298c9d60957034e76d | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-02-14 04:45:33.823371 | orchestrator | | 9322d5f32b08422fbaa9fad83344d71b | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-02-14 04:45:33.823382 | orchestrator | | 986cd189a6224f368e56cb61a80a704e | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-02-14 04:45:33.823392 | orchestrator | | 9ce53338c6444b4fbdaee4c565273356 | RegionOne | manilav2 | sharev2 | True | internal | https://api-int.testbed.osism.xyz:8786/v2 | 2026-02-14 04:45:33.823403 | orchestrator | | a23bb2232ebf4bfc860d863610617f99 | RegionOne | manila | share | True | internal | https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-02-14 04:45:33.823414 | orchestrator | | a2741fad8ea84cd1b575f4aeee8503e4 | RegionOne | skyline | panel | True | internal | https://api-int.testbed.osism.xyz:9998 | 2026-02-14 04:45:33.823424 | orchestrator | | b6c04fc801904886b963b195b9e1efbf | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-02-14 04:45:33.823459 | orchestrator | | bbe28be4ab70457fa19b17f2e08af458 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-02-14 04:45:33.823472 | orchestrator | | bc4153c185eb476e9b577f5e8a8a7f5a | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-02-14 04:45:33.823563 | orchestrator | | bf22103e5fa548b6bcc54866de2c4335 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-02-14 
04:45:33.823576 | orchestrator | | c4c8d5bf521f40c0bd713bff244d3786 | RegionOne | manila | share | True | public | https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-02-14 04:45:33.823588 | orchestrator | | d6b7cfa00c1b4c81a8f910a3f55f2371 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-02-14 04:45:33.823600 | orchestrator | | e680909d56174dd88f09284ef55c9977 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-02-14 04:45:33.823612 | orchestrator | | f5aafc8e252f4dbe9f800ec525319641 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-02-14 04:45:33.823625 | orchestrator | | f7e8c81563894b2596f5dbf1bbe8905a | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-02-14 04:45:33.823637 | orchestrator | | fb03c9443aaa4b5eb7127c1f8622af4b | RegionOne | manilav2 | sharev2 | True | public | https://api.testbed.osism.xyz:8786/v2 | 2026-02-14 04:45:33.823649 | orchestrator | | fd4c130ff6354b5ca3d2dea28d8a66fb | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-02-14 04:45:33.823661 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-02-14 04:45:34.074807 | orchestrator | 2026-02-14 04:45:34.074916 | orchestrator | # Cinder 2026-02-14 04:45:34.074940 | orchestrator | 2026-02-14 04:45:34.074960 | orchestrator | + echo 2026-02-14 04:45:34.074979 | orchestrator | + echo '# Cinder' 2026-02-14 04:45:34.074999 | orchestrator | + echo 2026-02-14 04:45:34.075018 | orchestrator | + openstack volume service list 2026-02-14 04:45:36.660980 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-02-14 04:45:36.661088 | 
orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2026-02-14 04:45:36.661103 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-02-14 04:45:36.661115 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-02-14T04:45:31.000000 | 2026-02-14 04:45:36.661127 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-02-14T04:45:31.000000 | 2026-02-14 04:45:36.661139 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-02-14T04:45:31.000000 | 2026-02-14 04:45:36.661150 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-02-14T04:45:30.000000 | 2026-02-14 04:45:36.661161 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-02-14T04:45:28.000000 | 2026-02-14 04:45:36.661172 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-02-14T04:45:28.000000 | 2026-02-14 04:45:36.661183 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-02-14T04:45:36.000000 | 2026-02-14 04:45:36.661194 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-02-14T04:45:27.000000 | 2026-02-14 04:45:36.661227 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-02-14T04:45:28.000000 | 2026-02-14 04:45:36.661239 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-02-14 04:45:36.919582 | orchestrator | 2026-02-14 04:45:36.919704 | orchestrator | # Neutron 2026-02-14 04:45:36.919727 | orchestrator | 2026-02-14 04:45:36.919745 | orchestrator | + echo 2026-02-14 04:45:36.919763 | orchestrator | + echo '# Neutron' 2026-02-14 04:45:36.919783 | orchestrator | + echo 2026-02-14 04:45:36.919800 | orchestrator | + openstack network agent list 2026-02-14 
04:45:39.606941 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-02-14 04:45:39.607047 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2026-02-14 04:45:39.607062 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-02-14 04:45:39.607074 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2026-02-14 04:45:39.607085 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2026-02-14 04:45:39.607096 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2026-02-14 04:45:39.607126 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2026-02-14 04:45:39.607138 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2026-02-14 04:45:39.607149 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2026-02-14 04:45:39.607160 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2026-02-14 04:45:39.607171 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2026-02-14 04:45:39.607182 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2026-02-14 04:45:39.607193 | orchestrator | 
+--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-02-14 04:45:39.869047 | orchestrator | + openstack network service provider list 2026-02-14 04:45:42.413353 | orchestrator | +---------------+------+---------+ 2026-02-14 04:45:42.413455 | orchestrator | | Service Type | Name | Default | 2026-02-14 04:45:42.413469 | orchestrator | +---------------+------+---------+ 2026-02-14 04:45:42.413540 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2026-02-14 04:45:42.413556 | orchestrator | +---------------+------+---------+ 2026-02-14 04:45:42.676612 | orchestrator | 2026-02-14 04:45:42.676699 | orchestrator | + echo 2026-02-14 04:45:42.676712 | orchestrator | + echo '# Nova' 2026-02-14 04:45:42.676721 | orchestrator | # Nova 2026-02-14 04:45:42.676730 | orchestrator | 2026-02-14 04:45:42.676739 | orchestrator | + echo 2026-02-14 04:45:42.676748 | orchestrator | + openstack compute service list 2026-02-14 04:45:45.295668 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-02-14 04:45:45.295807 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2026-02-14 04:45:45.295824 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-02-14 04:45:45.295871 | orchestrator | | 876c7117-6cdd-4bca-843f-82b6b69d88d8 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-02-14T04:45:35.000000 | 2026-02-14 04:45:45.295883 | orchestrator | | 1ba56b00-1095-4fdf-a1f9-c38f3aa55d70 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-02-14T04:45:41.000000 | 2026-02-14 04:45:45.295894 | orchestrator | | 190278d5-da43-40f9-a2de-f9d0872d785d | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-02-14T04:45:41.000000 | 2026-02-14 
04:45:45.295905 | orchestrator | | 2e057bea-10d4-477d-a7d9-d550cd1cb5e3 | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-02-14T04:45:36.000000 | 2026-02-14 04:45:45.295916 | orchestrator | | 0e66af85-f8ac-4042-9358-91641eddd479 | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-02-14T04:45:38.000000 | 2026-02-14 04:45:45.295927 | orchestrator | | 3d59623a-221a-4894-8f63-1a6eb03ccb92 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-02-14T04:45:38.000000 | 2026-02-14 04:45:45.295938 | orchestrator | | 5bf2322f-6a4d-4fd5-bb69-157adaa660dc | nova-compute | testbed-node-5 | nova | enabled | up | 2026-02-14T04:45:38.000000 | 2026-02-14 04:45:45.295948 | orchestrator | | ddfe6757-406c-4d0d-af8b-32501493e5a0 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-02-14T04:45:38.000000 | 2026-02-14 04:45:45.295959 | orchestrator | | e6111b4c-df98-44ec-9e4a-d9c5f508449b | nova-compute | testbed-node-3 | nova | enabled | up | 2026-02-14T04:45:38.000000 | 2026-02-14 04:45:45.295970 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-02-14 04:45:45.565905 | orchestrator | + openstack hypervisor list 2026-02-14 04:45:48.940458 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-02-14 04:45:48.940633 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-02-14 04:45:48.940650 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-02-14 04:45:48.940662 | orchestrator | | b166202d-c0c9-4033-86c6-4205ef9b4734 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-02-14 04:45:48.940673 | orchestrator | | a077d254-0cbd-4b87-9226-a8a3f2745836 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-02-14 04:45:48.940684 | orchestrator | | 
a8533dbe-a802-4553-969b-a7eb4404bbe9 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2026-02-14 04:45:48.940695 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-02-14 04:45:49.208189 | orchestrator | 2026-02-14 04:45:49.208284 | orchestrator | # Run OpenStack test play 2026-02-14 04:45:49.208395 | orchestrator | + echo 2026-02-14 04:45:49.208409 | orchestrator | + echo '# Run OpenStack test play' 2026-02-14 04:45:49.208420 | orchestrator | + echo 2026-02-14 04:45:49.208440 | orchestrator | 2026-02-14 04:45:49.208450 | orchestrator | + osism apply --environment openstack test 2026-02-14 04:45:51.235418 | orchestrator | 2026-02-14 04:45:51 | INFO  | Trying to run play test in environment openstack 2026-02-14 04:46:01.370195 | orchestrator | 2026-02-14 04:46:01 | INFO  | Task 40a4aca5-b632-44e8-bc59-83d008607083 (test) was prepared for execution. 2026-02-14 04:46:01.370307 | orchestrator | 2026-02-14 04:46:01 | INFO  | It takes a moment until task 40a4aca5-b632-44e8-bc59-83d008607083 (test) has been started and output is visible here. 
2026-02-14 04:48:35.787236 | orchestrator | 2026-02-14 04:48:35.787387 | orchestrator | PLAY [Create test project] ***************************************************** 2026-02-14 04:48:35.787471 | orchestrator | 2026-02-14 04:48:35.787495 | orchestrator | TASK [Create test domain] ****************************************************** 2026-02-14 04:48:35.787516 | orchestrator | Saturday 14 February 2026 04:46:05 +0000 (0:00:00.071) 0:00:00.071 ***** 2026-02-14 04:48:35.787534 | orchestrator | changed: [localhost] 2026-02-14 04:48:35.787556 | orchestrator | 2026-02-14 04:48:35.787574 | orchestrator | TASK [Create test-admin user] ************************************************** 2026-02-14 04:48:35.787593 | orchestrator | Saturday 14 February 2026 04:46:09 +0000 (0:00:03.656) 0:00:03.728 ***** 2026-02-14 04:48:35.787648 | orchestrator | changed: [localhost] 2026-02-14 04:48:35.787668 | orchestrator | 2026-02-14 04:48:35.787687 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2026-02-14 04:48:35.787705 | orchestrator | Saturday 14 February 2026 04:46:13 +0000 (0:00:04.177) 0:00:07.905 ***** 2026-02-14 04:48:35.787723 | orchestrator | changed: [localhost] 2026-02-14 04:48:35.787742 | orchestrator | 2026-02-14 04:48:35.787761 | orchestrator | TASK [Create test project] ***************************************************** 2026-02-14 04:48:35.787779 | orchestrator | Saturday 14 February 2026 04:46:19 +0000 (0:00:06.541) 0:00:14.446 ***** 2026-02-14 04:48:35.787797 | orchestrator | changed: [localhost] 2026-02-14 04:48:35.787818 | orchestrator | 2026-02-14 04:48:35.787837 | orchestrator | TASK [Create test user] ******************************************************** 2026-02-14 04:48:35.787855 | orchestrator | Saturday 14 February 2026 04:46:23 +0000 (0:00:04.027) 0:00:18.474 ***** 2026-02-14 04:48:35.787873 | orchestrator | changed: [localhost] 2026-02-14 04:48:35.787892 | orchestrator | 2026-02-14 04:48:35.787912 | 
orchestrator | TASK [Add member roles to user test] ******************************************* 2026-02-14 04:48:35.787932 | orchestrator | Saturday 14 February 2026 04:46:28 +0000 (0:00:04.406) 0:00:22.880 ***** 2026-02-14 04:48:35.787943 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2026-02-14 04:48:35.787955 | orchestrator | changed: [localhost] => (item=member) 2026-02-14 04:48:35.787966 | orchestrator | changed: [localhost] => (item=creator) 2026-02-14 04:48:35.787977 | orchestrator | 2026-02-14 04:48:35.787988 | orchestrator | TASK [Create test server group] ************************************************ 2026-02-14 04:48:35.787999 | orchestrator | Saturday 14 February 2026 04:46:40 +0000 (0:00:11.702) 0:00:34.582 ***** 2026-02-14 04:48:35.788010 | orchestrator | changed: [localhost] 2026-02-14 04:48:35.788021 | orchestrator | 2026-02-14 04:48:35.788032 | orchestrator | TASK [Create ssh security group] *********************************************** 2026-02-14 04:48:35.788042 | orchestrator | Saturday 14 February 2026 04:46:44 +0000 (0:00:04.127) 0:00:38.710 ***** 2026-02-14 04:48:35.788053 | orchestrator | changed: [localhost] 2026-02-14 04:48:35.788064 | orchestrator | 2026-02-14 04:48:35.788075 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2026-02-14 04:48:35.788085 | orchestrator | Saturday 14 February 2026 04:46:48 +0000 (0:00:04.756) 0:00:43.466 ***** 2026-02-14 04:48:35.788096 | orchestrator | changed: [localhost] 2026-02-14 04:48:35.788107 | orchestrator | 2026-02-14 04:48:35.788117 | orchestrator | TASK [Create icmp security group] ********************************************** 2026-02-14 04:48:35.788128 | orchestrator | Saturday 14 February 2026 04:46:53 +0000 (0:00:04.330) 0:00:47.797 ***** 2026-02-14 04:48:35.788139 | orchestrator | changed: [localhost] 2026-02-14 04:48:35.788150 | orchestrator | 2026-02-14 04:48:35.788161 | orchestrator | TASK [Add rule to icmp security 
group] ***************************************** 2026-02-14 04:48:35.788172 | orchestrator | Saturday 14 February 2026 04:46:57 +0000 (0:00:03.869) 0:00:51.666 ***** 2026-02-14 04:48:35.788182 | orchestrator | changed: [localhost] 2026-02-14 04:48:35.788193 | orchestrator | 2026-02-14 04:48:35.788204 | orchestrator | TASK [Create test keypair] ***************************************************** 2026-02-14 04:48:35.788215 | orchestrator | Saturday 14 February 2026 04:47:01 +0000 (0:00:04.186) 0:00:55.853 ***** 2026-02-14 04:48:35.788225 | orchestrator | changed: [localhost] 2026-02-14 04:48:35.788236 | orchestrator | 2026-02-14 04:48:35.788247 | orchestrator | TASK [Create test network] ***************************************************** 2026-02-14 04:48:35.788258 | orchestrator | Saturday 14 February 2026 04:47:05 +0000 (0:00:03.897) 0:00:59.751 ***** 2026-02-14 04:48:35.788269 | orchestrator | changed: [localhost] 2026-02-14 04:48:35.788280 | orchestrator | 2026-02-14 04:48:35.788291 | orchestrator | TASK [Create test subnet] ****************************************************** 2026-02-14 04:48:35.788302 | orchestrator | Saturday 14 February 2026 04:47:10 +0000 (0:00:04.829) 0:01:04.581 ***** 2026-02-14 04:48:35.788313 | orchestrator | changed: [localhost] 2026-02-14 04:48:35.788324 | orchestrator | 2026-02-14 04:48:35.788335 | orchestrator | TASK [Create test router] ****************************************************** 2026-02-14 04:48:35.788356 | orchestrator | Saturday 14 February 2026 04:47:15 +0000 (0:00:05.417) 0:01:09.998 ***** 2026-02-14 04:48:35.788367 | orchestrator | changed: [localhost] 2026-02-14 04:48:35.788378 | orchestrator | 2026-02-14 04:48:35.788389 | orchestrator | PLAY [Manage test instances and volumes] *************************************** 2026-02-14 04:48:35.788400 | orchestrator | 2026-02-14 04:48:35.788446 | orchestrator | TASK [Get test server group] *************************************************** 2026-02-14 04:48:35.788458 
| orchestrator | Saturday 14 February 2026 04:47:26 +0000 (0:00:11.082) 0:01:21.080 ***** 2026-02-14 04:48:35.788469 | orchestrator | ok: [localhost] 2026-02-14 04:48:35.788481 | orchestrator | 2026-02-14 04:48:35.788548 | orchestrator | TASK [Detach test volume] ****************************************************** 2026-02-14 04:48:35.788561 | orchestrator | Saturday 14 February 2026 04:47:30 +0000 (0:00:03.589) 0:01:24.669 ***** 2026-02-14 04:48:35.788572 | orchestrator | skipping: [localhost] 2026-02-14 04:48:35.788583 | orchestrator | 2026-02-14 04:48:35.788594 | orchestrator | TASK [Delete test volume] ****************************************************** 2026-02-14 04:48:35.788604 | orchestrator | Saturday 14 February 2026 04:47:30 +0000 (0:00:00.054) 0:01:24.724 ***** 2026-02-14 04:48:35.788615 | orchestrator | skipping: [localhost] 2026-02-14 04:48:35.788626 | orchestrator | 2026-02-14 04:48:35.788637 | orchestrator | TASK [Delete test instances] *************************************************** 2026-02-14 04:48:35.788662 | orchestrator | Saturday 14 February 2026 04:47:30 +0000 (0:00:00.055) 0:01:24.779 ***** 2026-02-14 04:48:35.788673 | orchestrator | skipping: [localhost] => (item=test-4)  2026-02-14 04:48:35.788685 | orchestrator | skipping: [localhost] => (item=test-3)  2026-02-14 04:48:35.788720 | orchestrator | skipping: [localhost] => (item=test-2)  2026-02-14 04:48:35.788732 | orchestrator | skipping: [localhost] => (item=test-1)  2026-02-14 04:48:35.788743 | orchestrator | skipping: [localhost] => (item=test)  2026-02-14 04:48:35.788754 | orchestrator | skipping: [localhost] 2026-02-14 04:48:35.788764 | orchestrator | 2026-02-14 04:48:35.788775 | orchestrator | TASK [Wait for instance deletion to complete] ********************************** 2026-02-14 04:48:35.788786 | orchestrator | Saturday 14 February 2026 04:47:30 +0000 (0:00:00.191) 0:01:24.971 ***** 2026-02-14 04:48:35.788797 | orchestrator | skipping: [localhost] 2026-02-14 
04:48:35.788808 | orchestrator | 2026-02-14 04:48:35.788818 | orchestrator | TASK [Create test instances] *************************************************** 2026-02-14 04:48:35.788829 | orchestrator | Saturday 14 February 2026 04:47:30 +0000 (0:00:00.164) 0:01:25.136 ***** 2026-02-14 04:48:35.788840 | orchestrator | changed: [localhost] => (item=test) 2026-02-14 04:48:35.788851 | orchestrator | changed: [localhost] => (item=test-1) 2026-02-14 04:48:35.788862 | orchestrator | changed: [localhost] => (item=test-2) 2026-02-14 04:48:35.788873 | orchestrator | changed: [localhost] => (item=test-3) 2026-02-14 04:48:35.788883 | orchestrator | changed: [localhost] => (item=test-4) 2026-02-14 04:48:35.788894 | orchestrator | 2026-02-14 04:48:35.788905 | orchestrator | TASK [Wait for instance creation to complete] ********************************** 2026-02-14 04:48:35.788916 | orchestrator | Saturday 14 February 2026 04:47:35 +0000 (0:00:04.575) 0:01:29.711 ***** 2026-02-14 04:48:35.788926 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 2026-02-14 04:48:35.788939 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left). 2026-02-14 04:48:35.788950 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left). 2026-02-14 04:48:35.788961 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left). 
2026-02-14 04:48:35.788974 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j846137518483.3710', 'results_file': '/ansible/.ansible_async/j846137518483.3710', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-02-14 04:48:35.788988 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j515700755468.3735', 'results_file': '/ansible/.ansible_async/j515700755468.3735', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-02-14 04:48:35.789008 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j857934193734.3760', 'results_file': '/ansible/.ansible_async/j857934193734.3760', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-02-14 04:48:35.789019 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j344793044842.3785', 'results_file': '/ansible/.ansible_async/j344793044842.3785', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-02-14 04:48:35.789031 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j90988598175.3810', 'results_file': '/ansible/.ansible_async/j90988598175.3810', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-02-14 04:48:35.789042 | orchestrator | 2026-02-14 04:48:35.789053 | orchestrator | TASK [Add metadata to instances] *********************************************** 2026-02-14 04:48:35.789063 | orchestrator | Saturday 14 February 2026 04:48:21 +0000 (0:00:46.646) 0:02:16.357 ***** 2026-02-14 04:48:35.789074 | orchestrator | changed: [localhost] => (item=test) 2026-02-14 04:48:35.789085 | orchestrator | changed: [localhost] => (item=test-1) 2026-02-14 04:48:35.789096 | orchestrator | changed: [localhost] => (item=test-2) 2026-02-14 04:48:35.789107 | orchestrator | changed: 
[localhost] => (item=test-3) 2026-02-14 04:48:35.789118 | orchestrator | changed: [localhost] => (item=test-4) 2026-02-14 04:48:35.789128 | orchestrator | 2026-02-14 04:48:35.789139 | orchestrator | TASK [Wait for metadata to be added] ******************************************* 2026-02-14 04:48:35.789150 | orchestrator | Saturday 14 February 2026 04:48:26 +0000 (0:00:04.452) 0:02:20.810 ***** 2026-02-14 04:48:35.789161 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left). 2026-02-14 04:48:35.789173 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j805737901066.3907', 'results_file': '/ansible/.ansible_async/j805737901066.3907', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-02-14 04:48:35.789184 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j530656472239.3932', 'results_file': '/ansible/.ansible_async/j530656472239.3932', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-02-14 04:48:35.789195 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j962813179596.3957', 'results_file': '/ansible/.ansible_async/j962813179596.3957', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-02-14 04:48:35.789221 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j382655812096.3982', 'results_file': '/ansible/.ansible_async/j382655812096.3982', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-02-14 04:49:16.127162 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j512565468769.4007', 'results_file': '/ansible/.ansible_async/j512565468769.4007', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-02-14 04:49:16.127279 | orchestrator | 2026-02-14 
04:49:16.127298 | orchestrator | TASK [Add tag to instances] **************************************************** 2026-02-14 04:49:16.127312 | orchestrator | Saturday 14 February 2026 04:48:35 +0000 (0:00:09.463) 0:02:30.274 ***** 2026-02-14 04:49:16.127323 | orchestrator | changed: [localhost] => (item=test) 2026-02-14 04:49:16.127336 | orchestrator | changed: [localhost] => (item=test-1) 2026-02-14 04:49:16.127347 | orchestrator | changed: [localhost] => (item=test-2) 2026-02-14 04:49:16.127358 | orchestrator | changed: [localhost] => (item=test-3) 2026-02-14 04:49:16.127369 | orchestrator | changed: [localhost] => (item=test-4) 2026-02-14 04:49:16.127461 | orchestrator | 2026-02-14 04:49:16.127475 | orchestrator | TASK [Wait for tags to be added] *********************************************** 2026-02-14 04:49:16.127486 | orchestrator | Saturday 14 February 2026 04:48:40 +0000 (0:00:04.974) 0:02:35.248 ***** 2026-02-14 04:49:16.127498 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left). 
2026-02-14 04:49:16.127511 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j665392931175.4083', 'results_file': '/ansible/.ansible_async/j665392931175.4083', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-02-14 04:49:16.127523 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j153833527429.4108', 'results_file': '/ansible/.ansible_async/j153833527429.4108', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-02-14 04:49:16.127534 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j394787289115.4134', 'results_file': '/ansible/.ansible_async/j394787289115.4134', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-02-14 04:49:16.127545 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j305231954112.4160', 'results_file': '/ansible/.ansible_async/j305231954112.4160', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-02-14 04:49:16.127556 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j580169642275.4186', 'results_file': '/ansible/.ansible_async/j580169642275.4186', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-02-14 04:49:16.127567 | orchestrator | 2026-02-14 04:49:16.127578 | orchestrator | TASK [Create test volume] ****************************************************** 2026-02-14 04:49:16.127589 | orchestrator | Saturday 14 February 2026 04:48:50 +0000 (0:00:10.240) 0:02:45.489 ***** 2026-02-14 04:49:16.127599 | orchestrator | changed: [localhost] 2026-02-14 04:49:16.127610 | orchestrator | 2026-02-14 04:49:16.127621 | orchestrator | TASK [Attach test volume] ****************************************************** 2026-02-14 04:49:16.127632 | orchestrator | Saturday 14 February 
2026 04:48:57 +0000 (0:00:06.267) 0:02:51.756 ***** 2026-02-14 04:49:16.127642 | orchestrator | changed: [localhost] 2026-02-14 04:49:16.127653 | orchestrator | 2026-02-14 04:49:16.127664 | orchestrator | TASK [Create floating ip address] ********************************************** 2026-02-14 04:49:16.127675 | orchestrator | Saturday 14 February 2026 04:49:10 +0000 (0:00:13.433) 0:03:05.190 ***** 2026-02-14 04:49:16.127686 | orchestrator | ok: [localhost] 2026-02-14 04:49:16.127699 | orchestrator | 2026-02-14 04:49:16.127712 | orchestrator | TASK [Print floating ip address] *********************************************** 2026-02-14 04:49:16.127724 | orchestrator | Saturday 14 February 2026 04:49:15 +0000 (0:00:05.125) 0:03:10.315 ***** 2026-02-14 04:49:16.127736 | orchestrator | ok: [localhost] => { 2026-02-14 04:49:16.127749 | orchestrator |  "msg": "192.168.112.141" 2026-02-14 04:49:16.127761 | orchestrator | } 2026-02-14 04:49:16.127774 | orchestrator | 2026-02-14 04:49:16.127786 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 04:49:16.127800 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-14 04:49:16.127813 | orchestrator | 2026-02-14 04:49:16.127825 | orchestrator | 2026-02-14 04:49:16.127838 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 04:49:16.127850 | orchestrator | Saturday 14 February 2026 04:49:15 +0000 (0:00:00.052) 0:03:10.367 ***** 2026-02-14 04:49:16.127862 | orchestrator | =============================================================================== 2026-02-14 04:49:16.127874 | orchestrator | Wait for instance creation to complete --------------------------------- 46.65s 2026-02-14 04:49:16.127885 | orchestrator | Attach test volume ----------------------------------------------------- 13.43s 2026-02-14 04:49:16.127896 | orchestrator | Add member roles to user 
test ------------------------------------------ 11.70s 2026-02-14 04:49:16.127929 | orchestrator | Create test router ----------------------------------------------------- 11.08s 2026-02-14 04:49:16.127941 | orchestrator | Wait for tags to be added ---------------------------------------------- 10.24s 2026-02-14 04:49:16.127951 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.46s 2026-02-14 04:49:16.127962 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.54s 2026-02-14 04:49:16.127991 | orchestrator | Create test volume ------------------------------------------------------ 6.27s 2026-02-14 04:49:16.128002 | orchestrator | Create test subnet ------------------------------------------------------ 5.42s 2026-02-14 04:49:16.128013 | orchestrator | Create floating ip address ---------------------------------------------- 5.13s 2026-02-14 04:49:16.128024 | orchestrator | Add tag to instances ---------------------------------------------------- 4.97s 2026-02-14 04:49:16.128034 | orchestrator | Create test network ----------------------------------------------------- 4.83s 2026-02-14 04:49:16.128045 | orchestrator | Create ssh security group ----------------------------------------------- 4.76s 2026-02-14 04:49:16.128056 | orchestrator | Create test instances --------------------------------------------------- 4.58s 2026-02-14 04:49:16.128066 | orchestrator | Add metadata to instances ----------------------------------------------- 4.45s 2026-02-14 04:49:16.128077 | orchestrator | Create test user -------------------------------------------------------- 4.41s 2026-02-14 04:49:16.128088 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.33s 2026-02-14 04:49:16.128098 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.19s 2026-02-14 04:49:16.128109 | orchestrator | Create test-admin user 
-------------------------------------------------- 4.18s 2026-02-14 04:49:16.128120 | orchestrator | Create test server group ------------------------------------------------ 4.13s 2026-02-14 04:49:16.422944 | orchestrator | + server_list 2026-02-14 04:49:16.423057 | orchestrator | + openstack --os-cloud test server list 2026-02-14 04:49:20.290282 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-02-14 04:49:20.290387 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2026-02-14 04:49:20.290457 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-02-14 04:49:20.290467 | orchestrator | | 14ff744f-6f1e-42e8-af05-176636fa61ce | test-4 | ACTIVE | test=192.168.112.148, 192.168.200.70 | N/A (booted from volume) | SCS-1L-1 | 2026-02-14 04:49:20.290476 | orchestrator | | 0e9b5d8e-8a25-4aba-b77a-25a4c63c930a | test-1 | ACTIVE | test=192.168.112.167, 192.168.200.165 | N/A (booted from volume) | SCS-1L-1 | 2026-02-14 04:49:20.290485 | orchestrator | | b7338acb-0ef6-49cd-ba1e-c1486a2f699b | test-3 | ACTIVE | test=192.168.112.185, 192.168.200.248 | N/A (booted from volume) | SCS-1L-1 | 2026-02-14 04:49:20.290494 | orchestrator | | c879fa26-1f45-400b-bb33-99717f625e9f | test-2 | ACTIVE | test=192.168.112.143, 192.168.200.188 | N/A (booted from volume) | SCS-1L-1 | 2026-02-14 04:49:20.290503 | orchestrator | | cd435bd7-3204-4ea2-851b-4871ed088097 | test | ACTIVE | test=192.168.112.141, 192.168.200.177 | N/A (booted from volume) | SCS-1L-1 | 2026-02-14 04:49:20.290512 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-02-14 04:49:20.613121 | orchestrator | + openstack --os-cloud test server show test 2026-02-14 04:49:23.647664 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-14 04:49:23.647897 | orchestrator | | Field | Value | 2026-02-14 04:49:23.647938 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-14 04:49:23.647958 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-14 04:49:23.648003 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-14 04:49:23.648015 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-14 04:49:23.648026 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2026-02-14 04:49:23.648037 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-14 04:49:23.648049 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-14 04:49:23.648083 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-14 04:49:23.648108 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-14 04:49:23.648150 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-14 04:49:23.648170 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-14 04:49:23.648205 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-14 04:49:23.648226 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-14 04:49:23.648246 | orchestrator | | OS-EXT-STS:power_state | 
Running | 2026-02-14 04:49:23.648267 | orchestrator | | OS-EXT-STS:task_state | None | 2026-02-14 04:49:23.648288 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-02-14 04:49:23.648308 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-14T04:48:06.000000 | 2026-02-14 04:49:23.648343 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-02-14 04:49:23.648384 | orchestrator | | accessIPv4 | | 2026-02-14 04:49:23.648435 | orchestrator | | accessIPv6 | | 2026-02-14 04:49:23.648456 | orchestrator | | addresses | test=192.168.112.141, 192.168.200.177 | 2026-02-14 04:49:23.648483 | orchestrator | | config_drive | | 2026-02-14 04:49:23.648504 | orchestrator | | created | 2026-02-14T04:47:39Z | 2026-02-14 04:49:23.648523 | orchestrator | | description | None | 2026-02-14 04:49:23.648542 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-02-14 04:49:23.648561 | orchestrator | | hostId | 9e9fa7ab62d6aa5d139075976d6a469af3ddf8f9a96a66cf18f46226 | 2026-02-14 04:49:23.648581 | orchestrator | | host_status | None | 2026-02-14 04:49:23.648649 | orchestrator | | id | cd435bd7-3204-4ea2-851b-4871ed088097 | 2026-02-14 04:49:23.648673 | orchestrator | | image | N/A (booted from volume) | 2026-02-14 04:49:23.648694 | orchestrator | | key_name | test | 2026-02-14 04:49:23.648713 | orchestrator | | locked | False | 2026-02-14 04:49:23.648732 | orchestrator | | locked_reason | None | 2026-02-14 04:49:23.648752 | orchestrator | | name | test | 2026-02-14 04:49:23.648770 | orchestrator | | pinned_availability_zone | None | 2026-02-14 04:49:23.648790 | orchestrator | | progress | 0 | 2026-02-14 04:49:23.648809 | orchestrator | | 
project_id | dae1182b318e4385b1436c6bf28b0e50 | 2026-02-14 04:49:23.648828 | orchestrator | | properties | hostname='test' | 2026-02-14 04:49:23.648882 | orchestrator | | security_groups | name='ssh' | 2026-02-14 04:49:23.648902 | orchestrator | | | name='icmp' | 2026-02-14 04:49:23.648920 | orchestrator | | server_groups | None | 2026-02-14 04:49:23.648940 | orchestrator | | status | ACTIVE | 2026-02-14 04:49:23.648964 | orchestrator | | tags | test | 2026-02-14 04:49:23.648982 | orchestrator | | trusted_image_certificates | None | 2026-02-14 04:49:23.648994 | orchestrator | | updated | 2026-02-14T04:48:27Z | 2026-02-14 04:49:23.649005 | orchestrator | | user_id | 994cc450ba994678adab99c29b111b4c | 2026-02-14 04:49:23.649016 | orchestrator | | volumes_attached | delete_on_termination='True', id='d151aaa2-8795-4ebc-8885-c09ef4aa8d26' | 2026-02-14 04:49:23.649033 | orchestrator | | | delete_on_termination='False', id='35d35655-2f56-4e5b-989f-c2dda87c0c9e' | 2026-02-14 04:49:23.651060 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-14 04:49:23.926650 | orchestrator | + openstack --os-cloud test server show test-1 2026-02-14 04:49:26.982629 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-14 
04:49:26.982731 | orchestrator | | Field | Value | 2026-02-14 04:49:26.982752 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-14 04:49:26.982764 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-14 04:49:26.982775 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-14 04:49:26.982786 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-14 04:49:26.982797 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2026-02-14 04:49:26.982824 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-14 04:49:26.982835 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-14 04:49:26.982863 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-14 04:49:26.982875 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-14 04:49:26.982886 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-14 04:49:26.982900 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-14 04:49:26.982911 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-14 04:49:26.982921 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-14 04:49:26.982931 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-02-14 04:49:26.982947 | orchestrator | | OS-EXT-STS:task_state | None | 2026-02-14 04:49:26.982958 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-02-14 04:49:26.982968 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-14T04:48:06.000000 | 2026-02-14 04:49:26.982985 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-02-14 04:49:26.982997 | orchestrator | | accessIPv4 | | 2026-02-14 
04:49:26.983008 | orchestrator | | accessIPv6 | | 2026-02-14 04:49:26.983022 | orchestrator | | addresses | test=192.168.112.167, 192.168.200.165 | 2026-02-14 04:49:26.983033 | orchestrator | | config_drive | | 2026-02-14 04:49:26.983043 | orchestrator | | created | 2026-02-14T04:47:41Z | 2026-02-14 04:49:26.983060 | orchestrator | | description | None | 2026-02-14 04:49:26.983070 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-02-14 04:49:26.983081 | orchestrator | | hostId | 9e9fa7ab62d6aa5d139075976d6a469af3ddf8f9a96a66cf18f46226 | 2026-02-14 04:49:26.983091 | orchestrator | | host_status | None | 2026-02-14 04:49:26.983108 | orchestrator | | id | 0e9b5d8e-8a25-4aba-b77a-25a4c63c930a | 2026-02-14 04:49:26.983119 | orchestrator | | image | N/A (booted from volume) | 2026-02-14 04:49:26.983130 | orchestrator | | key_name | test | 2026-02-14 04:49:26.983144 | orchestrator | | locked | False | 2026-02-14 04:49:26.983155 | orchestrator | | locked_reason | None | 2026-02-14 04:49:26.983168 | orchestrator | | name | test-1 | 2026-02-14 04:49:26.983185 | orchestrator | | pinned_availability_zone | None | 2026-02-14 04:49:26.983197 | orchestrator | | progress | 0 | 2026-02-14 04:49:26.983209 | orchestrator | | project_id | dae1182b318e4385b1436c6bf28b0e50 | 2026-02-14 04:49:26.983221 | orchestrator | | properties | hostname='test-1' | 2026-02-14 04:49:26.983240 | orchestrator | | security_groups | name='ssh' | 2026-02-14 04:49:26.983253 | orchestrator | | | name='icmp' | 2026-02-14 04:49:26.983266 | orchestrator | | server_groups | None | 2026-02-14 04:49:26.983278 | orchestrator | | status | ACTIVE | 2026-02-14 
04:49:26.983290 | orchestrator | | tags | test | 2026-02-14 04:49:26.983307 | orchestrator | | trusted_image_certificates | None | 2026-02-14 04:49:26.983319 | orchestrator | | updated | 2026-02-14T04:48:28Z | 2026-02-14 04:49:26.983331 | orchestrator | | user_id | 994cc450ba994678adab99c29b111b4c | 2026-02-14 04:49:26.983344 | orchestrator | | volumes_attached | delete_on_termination='True', id='a4c01d32-f5e7-4418-aa7a-229c2bfb56ec' | 2026-02-14 04:49:26.985685 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-14 04:49:27.246370 | orchestrator | + openstack --os-cloud test server show test-2 2026-02-14 04:49:30.234814 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-14 04:49:30.234924 | orchestrator | | Field | Value | 2026-02-14 04:49:30.234960 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-14 04:49:30.234978 | 
orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-14 04:49:30.235010 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-14 04:49:30.235022 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-14 04:49:30.235033 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-02-14 04:49:30.235045 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-14 04:49:30.235056 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-14 04:49:30.235084 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-14 04:49:30.235097 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-14 04:49:30.235108 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-14 04:49:30.235119 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-14 04:49:30.235143 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-14 04:49:30.235154 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-14 04:49:30.235165 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-02-14 04:49:30.235176 | orchestrator | | OS-EXT-STS:task_state | None | 2026-02-14 04:49:30.235187 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-02-14 04:49:30.235198 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-14T04:48:07.000000 | 2026-02-14 04:49:30.235217 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-02-14 04:49:30.235229 | orchestrator | | accessIPv4 | | 2026-02-14 04:49:30.235240 | orchestrator | | accessIPv6 | | 2026-02-14 04:49:30.235263 | orchestrator | | addresses | test=192.168.112.143, 192.168.200.188 | 2026-02-14 04:49:30.235274 | orchestrator | | config_drive | | 2026-02-14 04:49:30.235285 | orchestrator | | created | 2026-02-14T04:47:41Z | 2026-02-14 04:49:30.235296 | orchestrator | | description | None | 2026-02-14 04:49:30.235307 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', 
extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-02-14 04:49:30.235318 | orchestrator | | hostId | adfcbbdba7f9cb0d3805df013e32bc2adaf2139b3b0885d1dd931076 | 2026-02-14 04:49:30.235330 | orchestrator | | host_status | None | 2026-02-14 04:49:30.235348 | orchestrator | | id | c879fa26-1f45-400b-bb33-99717f625e9f | 2026-02-14 04:49:30.235361 | orchestrator | | image | N/A (booted from volume) | 2026-02-14 04:49:30.235375 | orchestrator | | key_name | test | 2026-02-14 04:49:30.235443 | orchestrator | | locked | False | 2026-02-14 04:49:30.235470 | orchestrator | | locked_reason | None | 2026-02-14 04:49:30.235489 | orchestrator | | name | test-2 | 2026-02-14 04:49:30.235509 | orchestrator | | pinned_availability_zone | None | 2026-02-14 04:49:30.235522 | orchestrator | | progress | 0 | 2026-02-14 04:49:30.235535 | orchestrator | | project_id | dae1182b318e4385b1436c6bf28b0e50 | 2026-02-14 04:49:30.235547 | orchestrator | | properties | hostname='test-2' | 2026-02-14 04:49:30.235569 | orchestrator | | security_groups | name='ssh' | 2026-02-14 04:49:30.235583 | orchestrator | | | name='icmp' | 2026-02-14 04:49:30.235605 | orchestrator | | server_groups | None | 2026-02-14 04:49:30.235623 | orchestrator | | status | ACTIVE | 2026-02-14 04:49:30.235637 | orchestrator | | tags | test | 2026-02-14 04:49:30.235650 | orchestrator | | trusted_image_certificates | None | 2026-02-14 04:49:30.235663 | orchestrator | | updated | 2026-02-14T04:48:29Z | 2026-02-14 04:49:30.235676 | orchestrator | | user_id | 994cc450ba994678adab99c29b111b4c | 2026-02-14 04:49:30.235689 | orchestrator | | volumes_attached | delete_on_termination='True', id='355014ae-4c9f-4d1d-be03-dc15b65115b9' | 2026-02-14 04:49:30.239292 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-14 04:49:30.504462 | orchestrator | + openstack --os-cloud test server show test-3 2026-02-14 04:49:33.391365 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-14 04:49:33.391555 | orchestrator | | Field | Value | 2026-02-14 04:49:33.391575 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-14 04:49:33.391601 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-14 04:49:33.391614 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-14 04:49:33.391625 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-14 04:49:33.391637 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-02-14 04:49:33.391648 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-14 04:49:33.391659 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-14 
04:49:33.391692 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-14 04:49:33.391717 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-14 04:49:33.391736 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-14 04:49:33.391757 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-14 04:49:33.391775 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-14 04:49:33.391793 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-14 04:49:33.391813 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-02-14 04:49:33.391831 | orchestrator | | OS-EXT-STS:task_state | None | 2026-02-14 04:49:33.391850 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-02-14 04:49:33.391868 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-14T04:48:07.000000 | 2026-02-14 04:49:33.391898 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-02-14 04:49:33.391932 | orchestrator | | accessIPv4 | | 2026-02-14 04:49:33.391953 | orchestrator | | accessIPv6 | | 2026-02-14 04:49:33.391974 | orchestrator | | addresses | test=192.168.112.185, 192.168.200.248 | 2026-02-14 04:49:33.392607 | orchestrator | | config_drive | | 2026-02-14 04:49:33.392642 | orchestrator | | created | 2026-02-14T04:47:41Z | 2026-02-14 04:49:33.392654 | orchestrator | | description | None | 2026-02-14 04:49:33.392666 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-02-14 04:49:33.392677 | orchestrator | | hostId | adfcbbdba7f9cb0d3805df013e32bc2adaf2139b3b0885d1dd931076 | 2026-02-14 04:49:33.392688 | orchestrator | | host_status | None | 2026-02-14 04:49:33.392724 | orchestrator | | id | 
b7338acb-0ef6-49cd-ba1e-c1486a2f699b | 2026-02-14 04:49:33.392741 | orchestrator | | image | N/A (booted from volume) | 2026-02-14 04:49:33.392753 | orchestrator | | key_name | test | 2026-02-14 04:49:33.392764 | orchestrator | | locked | False | 2026-02-14 04:49:33.392775 | orchestrator | | locked_reason | None | 2026-02-14 04:49:33.392786 | orchestrator | | name | test-3 | 2026-02-14 04:49:33.392797 | orchestrator | | pinned_availability_zone | None | 2026-02-14 04:49:33.392809 | orchestrator | | progress | 0 | 2026-02-14 04:49:33.392821 | orchestrator | | project_id | dae1182b318e4385b1436c6bf28b0e50 | 2026-02-14 04:49:33.392838 | orchestrator | | properties | hostname='test-3' | 2026-02-14 04:49:33.392858 | orchestrator | | security_groups | name='ssh' | 2026-02-14 04:49:33.392874 | orchestrator | | | name='icmp' | 2026-02-14 04:49:33.392886 | orchestrator | | server_groups | None | 2026-02-14 04:49:33.392897 | orchestrator | | status | ACTIVE | 2026-02-14 04:49:33.392908 | orchestrator | | tags | test | 2026-02-14 04:49:33.392919 | orchestrator | | trusted_image_certificates | None | 2026-02-14 04:49:33.392931 | orchestrator | | updated | 2026-02-14T04:48:30Z | 2026-02-14 04:49:33.392942 | orchestrator | | user_id | 994cc450ba994678adab99c29b111b4c | 2026-02-14 04:49:33.392963 | orchestrator | | volumes_attached | delete_on_termination='True', id='4fb98d75-f828-4155-9bc6-a08c44b494f2' | 2026-02-14 04:49:33.395351 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-14 04:49:33.632825 | orchestrator | + openstack --os-cloud test server show test-4 2026-02-14 04:49:36.565420 | 
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-14 04:49:36.565551 | orchestrator | | Field | Value | 2026-02-14 04:49:36.565569 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-14 04:49:36.565581 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-14 04:49:36.565592 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-14 04:49:36.565604 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-14 04:49:36.565615 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-02-14 04:49:36.565647 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-14 04:49:36.565658 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-14 04:49:36.565690 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-14 04:49:36.565702 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-14 04:49:36.565718 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-14 04:49:36.565730 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-14 04:49:36.565742 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-14 04:49:36.565753 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-14 04:49:36.565765 | orchestrator | | 
OS-EXT-STS:power_state | Running | 2026-02-14 04:49:36.565784 | orchestrator | | OS-EXT-STS:task_state | None | 2026-02-14 04:49:36.565813 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-02-14 04:49:36.565832 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-14T04:48:08.000000 | 2026-02-14 04:49:36.565863 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-02-14 04:49:36.565891 | orchestrator | | accessIPv4 | | 2026-02-14 04:49:36.565910 | orchestrator | | accessIPv6 | | 2026-02-14 04:49:36.565927 | orchestrator | | addresses | test=192.168.112.148, 192.168.200.70 | 2026-02-14 04:49:36.565941 | orchestrator | | config_drive | | 2026-02-14 04:49:36.565954 | orchestrator | | created | 2026-02-14T04:47:43Z | 2026-02-14 04:49:36.565967 | orchestrator | | description | None | 2026-02-14 04:49:36.565987 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-02-14 04:49:36.566000 | orchestrator | | hostId | 2ed025619af07d07137a90c0f17d3e32ae0d289e88fd8e559365d4a0 | 2026-02-14 04:49:36.566013 | orchestrator | | host_status | None | 2026-02-14 04:49:36.566101 | orchestrator | | id | 14ff744f-6f1e-42e8-af05-176636fa61ce | 2026-02-14 04:49:36.566121 | orchestrator | | image | N/A (booted from volume) | 2026-02-14 04:49:36.566134 | orchestrator | | key_name | test | 2026-02-14 04:49:36.566182 | orchestrator | | locked | False | 2026-02-14 04:49:36.566195 | orchestrator | | locked_reason | None | 2026-02-14 04:49:36.566208 | orchestrator | | name | test-4 | 2026-02-14 04:49:36.566229 | orchestrator | | pinned_availability_zone | None | 2026-02-14 04:49:36.566243 | orchestrator | | progress | 0 | 2026-02-14 
04:49:36.566256 | orchestrator | | project_id | dae1182b318e4385b1436c6bf28b0e50 | 2026-02-14 04:49:36.566269 | orchestrator | | properties | hostname='test-4' | 2026-02-14 04:49:36.566291 | orchestrator | | security_groups | name='ssh' | 2026-02-14 04:49:36.566309 | orchestrator | | | name='icmp' | 2026-02-14 04:49:36.566321 | orchestrator | | server_groups | None | 2026-02-14 04:49:36.566332 | orchestrator | | status | ACTIVE | 2026-02-14 04:49:36.566343 | orchestrator | | tags | test | 2026-02-14 04:49:36.566362 | orchestrator | | trusted_image_certificates | None | 2026-02-14 04:49:36.566373 | orchestrator | | updated | 2026-02-14T04:48:30Z | 2026-02-14 04:49:36.566412 | orchestrator | | user_id | 994cc450ba994678adab99c29b111b4c | 2026-02-14 04:49:36.566425 | orchestrator | | volumes_attached | delete_on_termination='True', id='eeaf954a-67e8-4b92-b5f3-a19e31ba3ff6' | 2026-02-14 04:49:36.569315 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-14 04:49:36.809982 | orchestrator | + server_ping 2026-02-14 04:49:36.811637 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-02-14 04:49:36.811655 | orchestrator | ++ tr -d '\r' 2026-02-14 04:49:39.656217 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-02-14 04:49:39.656321 | orchestrator | + ping -c3 192.168.112.143 2026-02-14 04:49:39.673212 | orchestrator | PING 192.168.112.143 (192.168.112.143) 56(84) bytes of data. 
2026-02-14 04:49:39.673300 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=1 ttl=63 time=9.44 ms 2026-02-14 04:49:40.668001 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=2 ttl=63 time=2.31 ms 2026-02-14 04:49:41.669749 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=3 ttl=63 time=1.95 ms 2026-02-14 04:49:41.669838 | orchestrator | 2026-02-14 04:49:41.669849 | orchestrator | --- 192.168.112.143 ping statistics --- 2026-02-14 04:49:41.669859 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-02-14 04:49:41.669867 | orchestrator | rtt min/avg/max/mdev = 1.951/4.568/9.440/3.447 ms 2026-02-14 04:49:41.669875 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-02-14 04:49:41.669883 | orchestrator | + ping -c3 192.168.112.167 2026-02-14 04:49:41.680621 | orchestrator | PING 192.168.112.167 (192.168.112.167) 56(84) bytes of data. 2026-02-14 04:49:41.680674 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=1 ttl=63 time=8.27 ms 2026-02-14 04:49:42.677046 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=2 ttl=63 time=2.90 ms 2026-02-14 04:49:43.678075 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=3 ttl=63 time=2.08 ms 2026-02-14 04:49:43.678177 | orchestrator | 2026-02-14 04:49:43.678194 | orchestrator | --- 192.168.112.167 ping statistics --- 2026-02-14 04:49:43.678207 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-02-14 04:49:43.678247 | orchestrator | rtt min/avg/max/mdev = 2.080/4.418/8.274/2.747 ms 2026-02-14 04:49:43.678260 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-02-14 04:49:43.678271 | orchestrator | + ping -c3 192.168.112.185 2026-02-14 04:49:43.689883 | orchestrator | PING 192.168.112.185 (192.168.112.185) 56(84) bytes of data. 
2026-02-14 04:49:43.689961 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=1 ttl=63 time=7.16 ms 2026-02-14 04:49:44.686876 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=2 ttl=63 time=2.77 ms 2026-02-14 04:49:45.688735 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=3 ttl=63 time=2.25 ms 2026-02-14 04:49:45.688855 | orchestrator | 2026-02-14 04:49:45.688873 | orchestrator | --- 192.168.112.185 ping statistics --- 2026-02-14 04:49:45.688885 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-02-14 04:49:45.688978 | orchestrator | rtt min/avg/max/mdev = 2.248/4.056/7.157/2.202 ms 2026-02-14 04:49:45.689297 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-02-14 04:49:45.689322 | orchestrator | + ping -c3 192.168.112.148 2026-02-14 04:49:45.701438 | orchestrator | PING 192.168.112.148 (192.168.112.148) 56(84) bytes of data. 2026-02-14 04:49:45.701537 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=1 ttl=63 time=6.85 ms 2026-02-14 04:49:46.697793 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=2 ttl=63 time=2.37 ms 2026-02-14 04:49:47.699010 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=3 ttl=63 time=1.99 ms 2026-02-14 04:49:47.699117 | orchestrator | 2026-02-14 04:49:47.699133 | orchestrator | --- 192.168.112.148 ping statistics --- 2026-02-14 04:49:47.699266 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-02-14 04:49:47.699353 | orchestrator | rtt min/avg/max/mdev = 1.988/3.735/6.847/2.205 ms 2026-02-14 04:49:47.699450 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-02-14 04:49:47.699467 | orchestrator | + ping -c3 192.168.112.141 2026-02-14 04:49:47.714857 | orchestrator | PING 192.168.112.141 (192.168.112.141) 56(84) bytes of data. 
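The `server_ping` helper expanded in the `set -x` trace above can be sketched as follows. This is a hedged reconstruction from the trace alone (the loop body, the `--os-cloud test` cloud name, and the `-c3` count all appear verbatim in the trace); it is not the canonical testbed source, and the function name is taken from the `+ server_ping` line:

```shell
# Reconstructed sketch of the server_ping step: enumerate all ACTIVE
# floating IPs of the "test" cloud and ping each three times. tr strips
# carriage returns that the CLI output may carry.
server_ping() {
    for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r'); do
        ping -c3 "$address"
    done
}
```

With `set -e` in effect (as in the surrounding script), any unreachable floating IP makes `ping` exit non-zero and fails the whole validation step, which is presumably the intended smoke-test behavior.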
2026-02-14 04:49:47.714920 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=1 ttl=63 time=10.2 ms 2026-02-14 04:49:48.708243 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=2 ttl=63 time=2.78 ms 2026-02-14 04:49:49.709778 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=3 ttl=63 time=1.62 ms 2026-02-14 04:49:49.709893 | orchestrator | 2026-02-14 04:49:49.709911 | orchestrator | --- 192.168.112.141 ping statistics --- 2026-02-14 04:49:49.709925 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-02-14 04:49:49.709937 | orchestrator | rtt min/avg/max/mdev = 1.618/4.872/10.218/3.809 ms 2026-02-14 04:49:49.709959 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-02-14 04:49:49.815452 | orchestrator | ok: Runtime: 0:10:05.595169 2026-02-14 04:49:49.864548 | 2026-02-14 04:49:49.864855 | TASK [Run tempest] 2026-02-14 04:49:50.399054 | orchestrator | skipping: Conditional result was False 2026-02-14 04:49:50.410559 | 2026-02-14 04:49:50.410706 | TASK [Check prometheus alert status] 2026-02-14 04:49:50.957088 | orchestrator | skipping: Conditional result was False 2026-02-14 04:49:50.969957 | 2026-02-14 04:49:50.970077 | PLAY [Upgrade testbed] 2026-02-14 04:49:50.982517 | 2026-02-14 04:49:50.982665 | TASK [Print next ceph version] 2026-02-14 04:49:51.065450 | orchestrator | ok 2026-02-14 04:49:51.078641 | 2026-02-14 04:49:51.078804 | TASK [Print next openstack version] 2026-02-14 04:49:51.160834 | orchestrator | ok 2026-02-14 04:49:51.172000 | 2026-02-14 04:49:51.172121 | TASK [Print next manager version] 2026-02-14 04:49:51.240693 | orchestrator | ok 2026-02-14 04:49:51.250979 | 2026-02-14 04:49:51.251150 | TASK [Set cloud fact (Zuul deployment)] 2026-02-14 04:49:51.297836 | orchestrator | ok 2026-02-14 04:49:51.308561 | 2026-02-14 04:49:51.308692 | TASK [Set cloud fact (local deployment)] 2026-02-14 04:49:51.333406 | orchestrator | skipping: Conditional result was False 2026-02-14 04:49:51.345724 | 2026-02-14 
04:49:51.345857 | TASK [Fetch manager address] 2026-02-14 04:49:51.618122 | orchestrator | ok 2026-02-14 04:49:51.631585 | 2026-02-14 04:49:51.631751 | TASK [Set manager_host address] 2026-02-14 04:49:51.711565 | orchestrator | ok 2026-02-14 04:49:51.722238 | 2026-02-14 04:49:51.722376 | TASK [Run upgrade] 2026-02-14 04:49:52.400157 | orchestrator | + set -e 2026-02-14 04:49:52.400287 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1 2026-02-14 04:49:52.400298 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1 2026-02-14 04:49:52.400307 | orchestrator | + CEPH_VERSION=reef 2026-02-14 04:49:52.400313 | orchestrator | + OPENSTACK_VERSION=2024.2 2026-02-14 04:49:52.400318 | orchestrator | + KOLLA_NAMESPACE=kolla/release 2026-02-14 04:49:52.400327 | orchestrator | + sh -c '/opt/configuration/scripts/upgrade-manager.sh 10.0.0-rc.1 reef 2024.2 kolla/release' 2026-02-14 04:49:52.410434 | orchestrator | + set -e 2026-02-14 04:49:52.410474 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-14 04:49:52.410534 | orchestrator | ++ export INTERACTIVE=false 2026-02-14 04:49:52.410572 | orchestrator | ++ INTERACTIVE=false 2026-02-14 04:49:52.410577 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-14 04:49:52.410586 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-14 04:49:52.412583 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible 2026-02-14 04:49:52.455736 | orchestrator | + OLD_MANAGER_VERSION=v0.20251130.0 2026-02-14 04:49:52.457055 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible 2026-02-14 04:49:52.500131 | orchestrator | 2026-02-14 04:49:52.500240 | orchestrator | # UPGRADE MANAGER 2026-02-14 04:49:52.500297 | orchestrator | 2026-02-14 04:49:52.500361 | orchestrator | + OLD_OPENSTACK_VERSION=2024.2 2026-02-14 04:49:52.500426 | orchestrator | + echo 2026-02-14 04:49:52.500445 | orchestrator | + echo '# UPGRADE 
MANAGER' 2026-02-14 04:49:52.500466 | orchestrator | + echo 2026-02-14 04:49:52.500484 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1 2026-02-14 04:49:52.500503 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1 2026-02-14 04:49:52.500521 | orchestrator | + CEPH_VERSION=reef 2026-02-14 04:49:52.500539 | orchestrator | + OPENSTACK_VERSION=2024.2 2026-02-14 04:49:52.500556 | orchestrator | + KOLLA_NAMESPACE=kolla/release 2026-02-14 04:49:52.500575 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 10.0.0-rc.1 2026-02-14 04:49:52.504963 | orchestrator | + set -e 2026-02-14 04:49:52.505023 | orchestrator | + VERSION=10.0.0-rc.1 2026-02-14 04:49:52.505036 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 10.0.0-rc.1/g' /opt/configuration/environments/manager/configuration.yml 2026-02-14 04:49:52.510234 | orchestrator | + [[ 10.0.0-rc.1 != \l\a\t\e\s\t ]] 2026-02-14 04:49:52.510271 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-02-14 04:49:52.514866 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-02-14 04:49:52.520056 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-02-14 04:49:52.529078 | orchestrator | + set -e 2026-02-14 04:49:52.529316 | orchestrator | /opt/configuration ~ 2026-02-14 04:49:52.529407 | orchestrator | + pushd /opt/configuration 2026-02-14 04:49:52.529424 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-14 04:49:52.529438 | orchestrator | + source /opt/venv/bin/activate 2026-02-14 04:49:52.531025 | orchestrator | ++ deactivate nondestructive 2026-02-14 04:49:52.531655 | orchestrator | ++ '[' -n '' ']' 2026-02-14 04:49:52.531687 | orchestrator | ++ '[' -n '' ']' 2026-02-14 04:49:52.531699 | orchestrator | ++ hash -r 2026-02-14 04:49:52.531710 | orchestrator | ++ '[' -n '' ']' 2026-02-14 04:49:52.531721 | orchestrator | ++ unset VIRTUAL_ENV 
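The `set-manager-version.sh` invocation traced above boils down to three `sed` edits on the manager environment's `configuration.yml`. A minimal sketch, reconstructed from the trace (the function wrapper and the overridable config path are illustrative additions, not the actual script layout):

```shell
# Sketch of the version-pinning logic from set-manager-version.sh:
# rewrite manager_version in place, and for a pinned (non-"latest")
# release drop the explicit ceph_version/openstack_version lines so
# they are resolved from the release metadata instead.
set_manager_version() {
    local version="$1"
    local config="${2:-/opt/configuration/environments/manager/configuration.yml}"

    # Pin the manager release.
    sed -i "s/manager_version: .*/manager_version: ${version}/g" "$config"

    # Matches the traced branch: [[ 10.0.0-rc.1 != latest ]].
    if [ "$version" != "latest" ]; then
        sed -i '/ceph_version:/d' "$config"
        sed -i '/openstack_version:/d' "$config"
    fi
}
```

In the trace this runs as `set-manager-version.sh 10.0.0-rc.1`, after which the sync/gilt overlay step re-renders the image and version files against the new pin.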
2026-02-14 04:49:52.531731 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-14 04:49:52.531742 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-02-14 04:49:52.531755 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-14 04:49:52.531766 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-14 04:49:52.531776 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-14 04:49:52.531787 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-14 04:49:52.531798 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-14 04:49:52.531810 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-14 04:49:52.532007 | orchestrator | ++ export PATH 2026-02-14 04:49:52.532043 | orchestrator | ++ '[' -n '' ']' 2026-02-14 04:49:52.532063 | orchestrator | ++ '[' -z '' ']' 2026-02-14 04:49:52.532083 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-14 04:49:52.532102 | orchestrator | ++ PS1='(venv) ' 2026-02-14 04:49:52.532121 | orchestrator | ++ export PS1 2026-02-14 04:49:52.532139 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-14 04:49:52.532159 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-14 04:49:52.532178 | orchestrator | ++ hash -r 2026-02-14 04:49:52.532327 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-02-14 04:49:53.654288 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-02-14 04:49:53.655578 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-02-14 04:49:53.656855 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-02-14 04:49:53.658238 | orchestrator | Requirement already satisfied: PyYAML in 
/opt/venv/lib/python3.12/site-packages (6.0.3) 2026-02-14 04:49:53.659506 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0) 2026-02-14 04:49:53.669577 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-02-14 04:49:53.671109 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-02-14 04:49:53.672278 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-02-14 04:49:53.673643 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-02-14 04:49:53.705314 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4) 2026-02-14 04:49:53.706668 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-02-14 04:49:53.708633 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-02-14 04:49:53.709831 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4) 2026-02-14 04:49:53.713939 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-02-14 04:49:53.919837 | orchestrator | ++ which gilt 2026-02-14 04:49:53.920995 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-02-14 04:49:53.921029 | orchestrator | + /opt/venv/bin/gilt overlay 2026-02-14 04:49:54.226905 | orchestrator | osism.cfg-generics: 2026-02-14 04:49:54.345754 | orchestrator | - copied (v0.20251130.0) 
/home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-02-14 04:49:54.346951 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-02-14 04:49:54.348652 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-02-14 04:49:54.348820 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-02-14 04:49:55.305012 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-02-14 04:49:55.318992 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-02-14 04:49:55.647803 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-02-14 04:49:55.727052 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-14 04:49:55.727146 | orchestrator | + deactivate 2026-02-14 04:49:55.727160 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-14 04:49:55.727171 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-14 04:49:55.727180 | orchestrator | + export PATH 2026-02-14 04:49:55.727189 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-14 04:49:55.727198 | orchestrator | + '[' -n '' ']' 2026-02-14 04:49:55.727207 | orchestrator | + hash -r 2026-02-14 04:49:55.727215 | orchestrator | + '[' -n '' ']' 2026-02-14 04:49:55.727224 | orchestrator | + unset VIRTUAL_ENV 2026-02-14 04:49:55.727232 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-14 04:49:55.727241 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-14 04:49:55.727250 | orchestrator | + unset -f deactivate 2026-02-14 04:49:55.727271 | orchestrator | ~ 2026-02-14 04:49:55.727281 | orchestrator | + popd 2026-02-14 04:49:55.729432 | orchestrator | + [[ 10.0.0-rc.1 == \l\a\t\e\s\t ]] 2026-02-14 04:49:55.729488 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release 2026-02-14 04:49:55.737598 | orchestrator | + set -e 2026-02-14 04:49:55.737648 | orchestrator | + NAMESPACE=kolla/release 2026-02-14 04:49:55.737657 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-02-14 04:49:55.746045 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-02-14 04:49:55.754220 | orchestrator | /opt/configuration ~ 2026-02-14 04:49:55.754266 | orchestrator | + set -e 2026-02-14 04:49:55.754275 | orchestrator | + pushd /opt/configuration 2026-02-14 04:49:55.754282 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-14 04:49:55.754289 | orchestrator | + source /opt/venv/bin/activate 2026-02-14 04:49:55.754456 | orchestrator | ++ deactivate nondestructive 2026-02-14 04:49:55.754682 | orchestrator | ++ '[' -n '' ']' 2026-02-14 04:49:55.754752 | orchestrator | ++ '[' -n '' ']' 2026-02-14 04:49:55.754766 | orchestrator | ++ hash -r 2026-02-14 04:49:55.754771 | orchestrator | ++ '[' -n '' ']' 2026-02-14 04:49:55.754775 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-14 04:49:55.754781 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-14 04:49:55.754786 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-02-14 04:49:55.754983 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-14 04:49:55.754991 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-14 04:49:55.755192 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-14 04:49:55.755202 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-14 04:49:55.755207 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-14 04:49:55.755214 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-14 04:49:55.755221 | orchestrator | ++ export PATH 2026-02-14 04:49:55.755226 | orchestrator | ++ '[' -n '' ']' 2026-02-14 04:49:55.755336 | orchestrator | ++ '[' -z '' ']' 2026-02-14 04:49:55.755413 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-14 04:49:55.755419 | orchestrator | ++ PS1='(venv) ' 2026-02-14 04:49:55.755423 | orchestrator | ++ export PS1 2026-02-14 04:49:55.755427 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-14 04:49:55.755431 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-14 04:49:55.755451 | orchestrator | ++ hash -r 2026-02-14 04:49:55.755526 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-02-14 04:49:56.279091 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-02-14 04:49:56.279199 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-02-14 04:49:56.280718 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-02-14 04:49:56.281949 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-02-14 04:49:56.283070 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-02-14 04:49:56.294224 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-02-14 04:49:56.295737 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-02-14 04:49:56.297015 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-02-14 04:49:56.298667 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-02-14 04:49:56.331636 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4) 2026-02-14 04:49:56.333049 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-02-14 04:49:56.334884 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-02-14 04:49:56.336558 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4) 2026-02-14 04:49:56.340763 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-02-14 04:49:56.552856 | orchestrator | ++ which gilt 2026-02-14 04:49:56.555637 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-02-14 04:49:56.555675 | orchestrator | + /opt/venv/bin/gilt overlay 2026-02-14 04:49:56.737098 | orchestrator | osism.cfg-generics: 2026-02-14 04:49:56.802513 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-02-14 04:49:56.802621 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-02-14 04:49:56.802949 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-02-14 04:49:56.803661 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-02-14 04:49:57.504776 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-02-14 04:49:57.515871 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-02-14 04:49:57.837881 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-02-14 04:49:57.904661 | orchestrator | ~ 2026-02-14 04:49:57.904771 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-14 04:49:57.904791 | orchestrator | + deactivate 2026-02-14 04:49:57.904834 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-14 04:49:57.904852 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-14 04:49:57.904865 | orchestrator | + export PATH 2026-02-14 04:49:57.904879 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-14 04:49:57.904892 | orchestrator | + '[' -n '' ']' 2026-02-14 04:49:57.904905 | orchestrator | + hash -r 2026-02-14 04:49:57.904918 | orchestrator | + '[' -n '' ']' 2026-02-14 04:49:57.904931 | orchestrator | + unset VIRTUAL_ENV 2026-02-14 04:49:57.904944 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-14 04:49:57.904957 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-14 04:49:57.904969 | orchestrator | + unset -f deactivate 2026-02-14 04:49:57.904982 | orchestrator | + popd 2026-02-14 04:49:57.906269 | orchestrator | ++ semver v0.20251130.0 6.0.0 2026-02-14 04:49:57.952838 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-14 04:49:57.953010 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0 2026-02-14 04:49:58.043307 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-14 04:49:58.043457 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml 2026-02-14 04:49:58.047173 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml 2026-02-14 04:49:58.050049 | orchestrator | +++ semver v0.20251130.0 9.5.0 2026-02-14 04:49:58.082494 | orchestrator | ++ '[' -1 -le 0 ']' 2026-02-14 04:49:58.082754 | orchestrator | +++ semver 10.0.0-rc.1 10.0.0-0 2026-02-14 04:49:58.146746 | orchestrator | ++ '[' 1 -ge 0 ']' 2026-02-14 04:49:58.146840 | orchestrator | ++ echo true 2026-02-14 04:49:58.146851 | orchestrator | + MANAGER_UPGRADE_CROSSES_10=true 2026-02-14 04:49:58.148885 | orchestrator | +++ semver 2024.2 2024.2 2026-02-14 04:49:58.203891 | orchestrator | ++ '[' 0 -le 0 ']' 2026-02-14 04:49:58.204548 | orchestrator | +++ semver 2024.2 2025.1 2026-02-14 04:49:58.259951 | orchestrator | ++ '[' -1 -ge 0 ']' 2026-02-14 04:49:58.260036 | orchestrator | ++ echo false 2026-02-14 04:49:58.261430 | orchestrator | + OPENSTACK_UPGRADE_CROSSES_2025=false 2026-02-14 04:49:58.261458 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-14 04:49:58.261467 | orchestrator | + echo 'om_rpc_vhost: openstack' 2026-02-14 04:49:58.261474 | orchestrator | + echo 'om_notify_vhost: openstack' 2026-02-14 04:49:58.261484 | orchestrator | + sed -i 's#manager_listener_broker_vhost: .*#manager_listener_broker_vhost: /openstack#g' /opt/configuration/environments/manager/configuration.yml 2026-02-14 04:49:58.267534 | orchestrator | + 
echo 'export RABBITMQ3TO4=true' 2026-02-14 04:49:58.267575 | orchestrator | + sudo tee -a /opt/manager-vars.sh 2026-02-14 04:49:58.289336 | orchestrator | export RABBITMQ3TO4=true 2026-02-14 04:49:58.292633 | orchestrator | + osism update manager 2026-02-14 04:50:03.894660 | orchestrator | Collecting uv 2026-02-14 04:50:03.977781 | orchestrator | Downloading uv-0.10.2-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB) 2026-02-14 04:50:03.993764 | orchestrator | Downloading uv-0.10.2-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (23.0 MB) 2026-02-14 04:50:04.750328 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 23.0/23.0 MB 34.9 MB/s eta 0:00:00 2026-02-14 04:50:04.807882 | orchestrator | Installing collected packages: uv 2026-02-14 04:50:05.269893 | orchestrator | Successfully installed uv-0.10.2 2026-02-14 04:50:06.000224 | orchestrator | Resolved 11 packages in 322ms 2026-02-14 04:50:06.015697 | orchestrator | Downloading cryptography (4.3MiB) 2026-02-14 04:50:06.040815 | orchestrator | Downloading ansible-core (2.1MiB) 2026-02-14 04:50:06.041882 | orchestrator | Downloading ansible (54.5MiB) 2026-02-14 04:50:06.042169 | orchestrator | Downloading netaddr (2.2MiB) 2026-02-14 04:50:06.409328 | orchestrator | Downloaded netaddr 2026-02-14 04:50:06.537296 | orchestrator | Downloaded cryptography 2026-02-14 04:50:06.638527 | orchestrator | Downloaded ansible-core 2026-02-14 04:50:13.069233 | orchestrator | Downloaded ansible 2026-02-14 04:50:13.069775 | orchestrator | Prepared 11 packages in 7.06s 2026-02-14 04:50:13.643786 | orchestrator | Installed 11 packages in 568ms 2026-02-14 04:50:13.643885 | orchestrator | + ansible==11.11.0 2026-02-14 04:50:13.643900 | orchestrator | + ansible-core==2.18.13 2026-02-14 04:50:13.643912 | orchestrator | + cffi==2.0.0 2026-02-14 04:50:13.643925 | orchestrator | + cryptography==46.0.5 2026-02-14 04:50:13.643936 | orchestrator | + jinja2==3.1.6 2026-02-14 04:50:13.643947 | orchestrator | 
+ markupsafe==3.0.3 2026-02-14 04:50:13.643957 | orchestrator | + netaddr==1.3.0 2026-02-14 04:50:13.643968 | orchestrator | + packaging==26.0 2026-02-14 04:50:13.643979 | orchestrator | + pycparser==3.0 2026-02-14 04:50:13.643989 | orchestrator | + pyyaml==6.0.3 2026-02-14 04:50:13.644000 | orchestrator | + resolvelib==1.0.1 2026-02-14 04:50:14.750010 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-200224xam8e2zq/tmpcgtep43q/ansible-collection-servicesv7vgn9_z'... 2026-02-14 04:50:16.108445 | orchestrator | Your branch is up to date with 'origin/main'. 2026-02-14 04:50:16.108544 | orchestrator | Already on 'main' 2026-02-14 04:50:16.585502 | orchestrator | Starting galaxy collection install process 2026-02-14 04:50:16.585617 | orchestrator | Process install dependency map 2026-02-14 04:50:16.585634 | orchestrator | Starting collection install process 2026-02-14 04:50:16.585645 | orchestrator | Installing 'osism.services:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/services' 2026-02-14 04:50:16.585657 | orchestrator | Created collection for osism.services:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/services 2026-02-14 04:50:16.585667 | orchestrator | osism.services:999.0.0 was installed successfully 2026-02-14 04:50:17.075254 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-200343982kanpk/tmprx7l2dbn/ansible-playbooks-managerej_if5u6'... 2026-02-14 04:50:17.657302 | orchestrator | Your branch is up to date with 'origin/main'. 
2026-02-14 04:50:17.657470 | orchestrator | Already on 'main' 2026-02-14 04:50:17.921238 | orchestrator | Starting galaxy collection install process 2026-02-14 04:50:17.921338 | orchestrator | Process install dependency map 2026-02-14 04:50:17.921353 | orchestrator | Starting collection install process 2026-02-14 04:50:17.921414 | orchestrator | Installing 'osism.manager:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/manager' 2026-02-14 04:50:17.921430 | orchestrator | Created collection for osism.manager:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/manager 2026-02-14 04:50:17.921441 | orchestrator | osism.manager:999.0.0 was installed successfully 2026-02-14 04:50:18.534986 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use 2026-02-14 04:50:18.535082 | orchestrator | -vvvv to see details 2026-02-14 04:50:18.942819 | orchestrator | 2026-02-14 04:50:18.942940 | orchestrator | PLAY [Apply role manager] ****************************************************** 2026-02-14 04:50:18.942964 | orchestrator | 2026-02-14 04:50:18.942982 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-14 04:50:22.807327 | orchestrator | ok: [testbed-manager] 2026-02-14 04:50:22.807495 | orchestrator | 2026-02-14 04:50:22.807513 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-02-14 04:50:22.867516 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-02-14 04:50:22.867615 | orchestrator | 2026-02-14 04:50:22.867651 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-02-14 04:50:24.570149 | orchestrator | ok: [testbed-manager] 2026-02-14 04:50:24.570260 | orchestrator | 2026-02-14 04:50:24.570280 | orchestrator | TASK 
[osism.services.manager : Gather variables for each operating system] ***** 2026-02-14 04:50:24.630918 | orchestrator | ok: [testbed-manager] 2026-02-14 04:50:24.631039 | orchestrator | 2026-02-14 04:50:24.631066 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-02-14 04:50:24.693107 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-02-14 04:50:24.693186 | orchestrator | 2026-02-14 04:50:24.693195 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-02-14 04:50:28.857517 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible) 2026-02-14 04:50:28.857624 | orchestrator | ok: [testbed-manager] => (item=/opt/archive) 2026-02-14 04:50:28.857640 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/configuration) 2026-02-14 04:50:28.857663 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/data) 2026-02-14 04:50:28.857675 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-02-14 04:50:28.857685 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/secrets) 2026-02-14 04:50:28.857696 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible/secrets) 2026-02-14 04:50:28.857706 | orchestrator | ok: [testbed-manager] => (item=/opt/state) 2026-02-14 04:50:28.857718 | orchestrator | 2026-02-14 04:50:28.857730 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-02-14 04:50:29.953869 | orchestrator | ok: [testbed-manager] 2026-02-14 04:50:29.953962 | orchestrator | 2026-02-14 04:50:29.953975 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-02-14 04:50:30.803723 | orchestrator | ok: [testbed-manager] 2026-02-14 04:50:30.803826 | orchestrator | 2026-02-14 04:50:30.803841 | orchestrator | TASK [osism.services.manager : Include ara 
config tasks] *********************** 2026-02-14 04:50:30.914090 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-02-14 04:50:30.914187 | orchestrator | 2026-02-14 04:50:30.914202 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-02-14 04:50:32.716990 | orchestrator | ok: [testbed-manager] => (item=ara) 2026-02-14 04:50:32.717093 | orchestrator | ok: [testbed-manager] => (item=ara-server) 2026-02-14 04:50:32.717110 | orchestrator | 2026-02-14 04:50:32.717123 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-02-14 04:50:33.637902 | orchestrator | ok: [testbed-manager] 2026-02-14 04:50:33.638069 | orchestrator | 2026-02-14 04:50:33.638092 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-02-14 04:50:33.707407 | orchestrator | skipping: [testbed-manager] 2026-02-14 04:50:33.707507 | orchestrator | 2026-02-14 04:50:33.707524 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-02-14 04:50:33.811298 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-02-14 04:50:33.811431 | orchestrator | 2026-02-14 04:50:33.811447 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-02-14 04:50:34.760841 | orchestrator | ok: [testbed-manager] 2026-02-14 04:50:34.760977 | orchestrator | 2026-02-14 04:50:34.761005 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-02-14 04:50:34.827085 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-02-14 04:50:34.827171 | 
orchestrator | 2026-02-14 04:50:34.827184 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-02-14 04:50:36.829759 | orchestrator | ok: [testbed-manager] => (item=None) 2026-02-14 04:50:36.829862 | orchestrator | ok: [testbed-manager] => (item=None) 2026-02-14 04:50:36.829877 | orchestrator | ok: [testbed-manager] 2026-02-14 04:50:36.829891 | orchestrator | 2026-02-14 04:50:36.829903 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-02-14 04:50:37.807933 | orchestrator | ok: [testbed-manager] 2026-02-14 04:50:37.808059 | orchestrator | 2026-02-14 04:50:37.808086 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-02-14 04:50:37.878908 | orchestrator | skipping: [testbed-manager] 2026-02-14 04:50:37.879033 | orchestrator | 2026-02-14 04:50:37.879049 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-02-14 04:50:37.995487 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-02-14 04:50:37.995568 | orchestrator | 2026-02-14 04:50:37.995578 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-02-14 04:50:38.677875 | orchestrator | ok: [testbed-manager] 2026-02-14 04:50:38.678825 | orchestrator | 2026-02-14 04:50:38.678860 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-02-14 04:50:39.209994 | orchestrator | ok: [testbed-manager] 2026-02-14 04:50:39.210158 | orchestrator | 2026-02-14 04:50:39.210173 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-02-14 04:50:41.125769 | orchestrator | ok: [testbed-manager] => (item=conductor) 2026-02-14 04:50:41.125885 | orchestrator | ok: [testbed-manager] => 
(item=openstack) 2026-02-14 04:50:41.125901 | orchestrator | 2026-02-14 04:50:41.125914 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-02-14 04:50:42.263587 | orchestrator | changed: [testbed-manager] 2026-02-14 04:50:42.263689 | orchestrator | 2026-02-14 04:50:42.263704 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-02-14 04:50:42.884155 | orchestrator | ok: [testbed-manager] 2026-02-14 04:50:42.884266 | orchestrator | 2026-02-14 04:50:42.884287 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-02-14 04:50:43.457773 | orchestrator | ok: [testbed-manager] 2026-02-14 04:50:43.457876 | orchestrator | 2026-02-14 04:50:43.457915 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-02-14 04:50:43.517933 | orchestrator | skipping: [testbed-manager] 2026-02-14 04:50:43.518091 | orchestrator | 2026-02-14 04:50:43.518109 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-02-14 04:50:43.615787 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-02-14 04:50:43.615871 | orchestrator | 2026-02-14 04:50:43.615886 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-02-14 04:50:43.678303 | orchestrator | ok: [testbed-manager] 2026-02-14 04:50:43.679206 | orchestrator | 2026-02-14 04:50:43.679236 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-02-14 04:50:46.724082 | orchestrator | ok: [testbed-manager] => (item=osism) 2026-02-14 04:50:46.724198 | orchestrator | ok: [testbed-manager] => (item=osism-update-docker) 2026-02-14 04:50:46.724215 | orchestrator | ok: [testbed-manager] => (item=osism-update-manager) 
2026-02-14 04:50:46.724227 | orchestrator | 2026-02-14 04:50:46.724240 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-02-14 04:50:47.748994 | orchestrator | ok: [testbed-manager] 2026-02-14 04:50:47.749117 | orchestrator | 2026-02-14 04:50:47.749146 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-02-14 04:50:48.796211 | orchestrator | ok: [testbed-manager] 2026-02-14 04:50:48.797084 | orchestrator | 2026-02-14 04:50:48.797115 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-02-14 04:50:49.831966 | orchestrator | ok: [testbed-manager] 2026-02-14 04:50:49.832068 | orchestrator | 2026-02-14 04:50:49.832083 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-02-14 04:50:49.923651 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-02-14 04:50:49.923744 | orchestrator | 2026-02-14 04:50:49.923758 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-02-14 04:50:49.987645 | orchestrator | ok: [testbed-manager] 2026-02-14 04:50:49.987739 | orchestrator | 2026-02-14 04:50:49.987753 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-02-14 04:50:51.029643 | orchestrator | ok: [testbed-manager] => (item=osism-include) 2026-02-14 04:50:51.029761 | orchestrator | 2026-02-14 04:50:51.029784 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-02-14 04:50:51.125301 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-02-14 04:50:51.125453 | orchestrator | 2026-02-14 04:50:51.125472 | orchestrator | TASK 
[osism.services.manager : Copy manager systemd unit file] ***************** 2026-02-14 04:50:52.132716 | orchestrator | ok: [testbed-manager] 2026-02-14 04:50:52.132817 | orchestrator | 2026-02-14 04:50:52.132832 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-02-14 04:50:53.288792 | orchestrator | ok: [testbed-manager] 2026-02-14 04:50:53.288892 | orchestrator | 2026-02-14 04:50:53.288906 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-02-14 04:50:53.362755 | orchestrator | skipping: [testbed-manager] 2026-02-14 04:50:53.362838 | orchestrator | 2026-02-14 04:50:53.362849 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-02-14 04:50:53.419830 | orchestrator | ok: [testbed-manager] 2026-02-14 04:50:53.419924 | orchestrator | 2026-02-14 04:50:53.419942 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-02-14 04:50:54.747963 | orchestrator | changed: [testbed-manager] 2026-02-14 04:50:54.748062 | orchestrator | 2026-02-14 04:50:54.748075 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-02-14 04:51:59.447139 | orchestrator | changed: [testbed-manager] 2026-02-14 04:51:59.447220 | orchestrator | 2026-02-14 04:51:59.447227 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-02-14 04:52:00.631894 | orchestrator | ok: [testbed-manager] 2026-02-14 04:52:00.631984 | orchestrator | 2026-02-14 04:52:00.631997 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-02-14 04:52:00.709960 | orchestrator | skipping: [testbed-manager] 2026-02-14 04:52:00.710103 | orchestrator | 2026-02-14 04:52:00.710120 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-02-14 
04:52:01.527608 | orchestrator | ok: [testbed-manager] 2026-02-14 04:52:01.527707 | orchestrator | 2026-02-14 04:52:01.527722 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-02-14 04:52:01.616107 | orchestrator | skipping: [testbed-manager] 2026-02-14 04:52:01.616191 | orchestrator | 2026-02-14 04:52:01.616202 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-14 04:52:01.616212 | orchestrator | 2026-02-14 04:52:01.616220 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-02-14 04:52:20.283226 | orchestrator | changed: [testbed-manager] 2026-02-14 04:52:20.283413 | orchestrator | 2026-02-14 04:52:20.283434 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-02-14 04:53:20.354408 | orchestrator | Pausing for 60 seconds 2026-02-14 04:53:20.354587 | orchestrator | changed: [testbed-manager] 2026-02-14 04:53:20.354603 | orchestrator | 2026-02-14 04:53:20.354617 | orchestrator | RUNNING HANDLER [osism.services.manager : Register that manager service was restarted] *** 2026-02-14 04:53:20.436736 | orchestrator | ok: [testbed-manager] 2026-02-14 04:53:20.436890 | orchestrator | 2026-02-14 04:53:20.436916 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-02-14 04:53:23.936772 | orchestrator | changed: [testbed-manager] 2026-02-14 04:53:23.936910 | orchestrator | 2026-02-14 04:53:23.936928 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-02-14 04:54:26.325106 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-02-14 04:54:26.325232 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2026-02-14 04:54:26.325307 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2026-02-14 04:54:26.325330 | orchestrator | changed: [testbed-manager] 2026-02-14 04:54:26.325345 | orchestrator | 2026-02-14 04:54:26.325357 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-02-14 04:54:37.176463 | orchestrator | changed: [testbed-manager] 2026-02-14 04:54:37.176605 | orchestrator | 2026-02-14 04:54:37.176622 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-02-14 04:54:37.251044 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-02-14 04:54:37.251181 | orchestrator | 2026-02-14 04:54:37.251199 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-14 04:54:37.251212 | orchestrator | 2026-02-14 04:54:37.251223 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-02-14 04:54:37.320619 | orchestrator | skipping: [testbed-manager] 2026-02-14 04:54:37.320720 | orchestrator | 2026-02-14 04:54:37.320736 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-02-14 04:54:37.396743 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-02-14 04:54:37.396849 | orchestrator | 2026-02-14 04:54:37.396892 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-02-14 04:54:38.464998 | orchestrator | changed: [testbed-manager] 2026-02-14 04:54:38.465099 | orchestrator | 2026-02-14 04:54:38.465116 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-02-14 04:54:42.092419 
| orchestrator | ok: [testbed-manager] 2026-02-14 04:54:42.092523 | orchestrator | 2026-02-14 04:54:42.092539 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-02-14 04:54:42.186319 | orchestrator | ok: [testbed-manager] => { 2026-02-14 04:54:42.186413 | orchestrator | "version_check_result.stdout_lines": [ 2026-02-14 04:54:42.186427 | orchestrator | "=== OSISM Container Version Check ===", 2026-02-14 04:54:42.186437 | orchestrator | "Checking running containers against expected versions...", 2026-02-14 04:54:42.186448 | orchestrator | "", 2026-02-14 04:54:42.186458 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-02-14 04:54:42.186469 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251208.0", 2026-02-14 04:54:42.186479 | orchestrator | " Enabled: true", 2026-02-14 04:54:42.186489 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251208.0", 2026-02-14 04:54:42.186499 | orchestrator | " Status: ✅ MATCH", 2026-02-14 04:54:42.186509 | orchestrator | "", 2026-02-14 04:54:42.186519 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-02-14 04:54:42.186529 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251208.0", 2026-02-14 04:54:42.186539 | orchestrator | " Enabled: true", 2026-02-14 04:54:42.186548 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251208.0", 2026-02-14 04:54:42.186558 | orchestrator | " Status: ✅ MATCH", 2026-02-14 04:54:42.186567 | orchestrator | "", 2026-02-14 04:54:42.186577 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-02-14 04:54:42.186587 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251208.0", 2026-02-14 04:54:42.186596 | orchestrator | " Enabled: true", 2026-02-14 04:54:42.186606 | orchestrator | " Running: 
registry.osism.tech/osism/osism-kubernetes:0.20251208.0", 2026-02-14 04:54:42.186616 | orchestrator | " Status: ✅ MATCH", 2026-02-14 04:54:42.186625 | orchestrator | "", 2026-02-14 04:54:42.186635 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-02-14 04:54:42.186644 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251208.0", 2026-02-14 04:54:42.186654 | orchestrator | " Enabled: true", 2026-02-14 04:54:42.186664 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251208.0", 2026-02-14 04:54:42.186673 | orchestrator | " Status: ✅ MATCH", 2026-02-14 04:54:42.186683 | orchestrator | "", 2026-02-14 04:54:42.186693 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-02-14 04:54:42.186702 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251208.0", 2026-02-14 04:54:42.186712 | orchestrator | " Enabled: true", 2026-02-14 04:54:42.186721 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251208.0", 2026-02-14 04:54:42.186731 | orchestrator | " Status: ✅ MATCH", 2026-02-14 04:54:42.186741 | orchestrator | "", 2026-02-14 04:54:42.186750 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-02-14 04:54:42.186782 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-14 04:54:42.186792 | orchestrator | " Enabled: true", 2026-02-14 04:54:42.186801 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-14 04:54:42.186811 | orchestrator | " Status: ✅ MATCH", 2026-02-14 04:54:42.186821 | orchestrator | "", 2026-02-14 04:54:42.186830 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-02-14 04:54:42.186841 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-14 04:54:42.186852 | orchestrator | " Enabled: true", 2026-02-14 04:54:42.186864 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-14 
04:54:42.186874 | orchestrator | " Status: ✅ MATCH", 2026-02-14 04:54:42.186885 | orchestrator | "", 2026-02-14 04:54:42.186896 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-02-14 04:54:42.186907 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-14 04:54:42.186918 | orchestrator | " Enabled: true", 2026-02-14 04:54:42.186939 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-14 04:54:42.186950 | orchestrator | " Status: ✅ MATCH", 2026-02-14 04:54:42.186961 | orchestrator | "", 2026-02-14 04:54:42.186972 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-02-14 04:54:42.186982 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251208.0", 2026-02-14 04:54:42.186991 | orchestrator | " Enabled: true", 2026-02-14 04:54:42.187001 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251208.0", 2026-02-14 04:54:42.187010 | orchestrator | " Status: ✅ MATCH", 2026-02-14 04:54:42.187020 | orchestrator | "", 2026-02-14 04:54:42.187034 | orchestrator | "Checking service: redis (Redis Cache)", 2026-02-14 04:54:42.187044 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-14 04:54:42.187054 | orchestrator | " Enabled: true", 2026-02-14 04:54:42.187064 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-14 04:54:42.187073 | orchestrator | " Status: ✅ MATCH", 2026-02-14 04:54:42.187083 | orchestrator | "", 2026-02-14 04:54:42.187093 | orchestrator | "Checking service: api (OSISM API Service)", 2026-02-14 04:54:42.187102 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-14 04:54:42.187112 | orchestrator | " Enabled: true", 2026-02-14 04:54:42.187121 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-14 04:54:42.187131 | orchestrator | " Status: ✅ MATCH", 2026-02-14 
04:54:42.187140 | orchestrator | "", 2026-02-14 04:54:42.187150 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-02-14 04:54:42.187159 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-14 04:54:42.187169 | orchestrator | " Enabled: true", 2026-02-14 04:54:42.187179 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-14 04:54:42.187188 | orchestrator | " Status: ✅ MATCH", 2026-02-14 04:54:42.187198 | orchestrator | "", 2026-02-14 04:54:42.187207 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-02-14 04:54:42.187217 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-14 04:54:42.187255 | orchestrator | " Enabled: true", 2026-02-14 04:54:42.187266 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-14 04:54:42.187276 | orchestrator | " Status: ✅ MATCH", 2026-02-14 04:54:42.187286 | orchestrator | "", 2026-02-14 04:54:42.187295 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-02-14 04:54:42.187305 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-14 04:54:42.187314 | orchestrator | " Enabled: true", 2026-02-14 04:54:42.187324 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-14 04:54:42.187350 | orchestrator | " Status: ✅ MATCH", 2026-02-14 04:54:42.187361 | orchestrator | "", 2026-02-14 04:54:42.187370 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-02-14 04:54:42.187380 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-14 04:54:42.187397 | orchestrator | " Enabled: true", 2026-02-14 04:54:42.187408 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-14 04:54:42.187417 | orchestrator | " Status: ✅ MATCH", 2026-02-14 04:54:42.187427 | orchestrator | "", 2026-02-14 04:54:42.187437 | orchestrator | "=== Summary 
===", 2026-02-14 04:54:42.187446 | orchestrator | "Errors (version mismatches): 0", 2026-02-14 04:54:42.187456 | orchestrator | "Warnings (expected containers not running): 0", 2026-02-14 04:54:42.187466 | orchestrator | "", 2026-02-14 04:54:42.187475 | orchestrator | "✅ All running containers match expected versions!" 2026-02-14 04:54:42.187485 | orchestrator | ] 2026-02-14 04:54:42.187495 | orchestrator | } 2026-02-14 04:54:42.187505 | orchestrator | 2026-02-14 04:54:42.187515 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-02-14 04:54:42.256719 | orchestrator | skipping: [testbed-manager] 2026-02-14 04:54:42.256835 | orchestrator | 2026-02-14 04:54:42.256852 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 04:54:42.256864 | orchestrator | testbed-manager : ok=51 changed=9 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0 2026-02-14 04:54:42.256874 | orchestrator | 2026-02-14 04:54:54.710990 | orchestrator | 2026-02-14 04:54:54 | INFO  | Task 25814f9c-9223-4770-944d-5acdc694948b (sync inventory) is running in background. Output coming soon. 
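The version check above compares, for each manager service, the expected container image tag against the image actually running. A minimal sketch of that per-service comparison (a hypothetical helper, not the real verify-versions script, which additionally handles disabled services and not-running warnings; in practice the running image would be read with something like `docker inspect -f '{{.Config.Image}}'`):

```shell
#!/bin/sh
# Sketch of the per-service check behind the "=== OSISM Container Version Check ===" output.
# check_image SERVICE EXPECTED RUNNING -> prints a MATCH/MISMATCH report, returns 0/1.
check_image() {
    service="$1"; expected="$2"; running="$3"
    printf 'Checking service: %s\n' "$service"
    printf '  Expected: %s\n' "$expected"
    printf '  Running: %s\n' "$running"
    if [ "$expected" = "$running" ]; then
        printf '  Status: MATCH\n'
        return 0
    fi
    printf '  Status: MISMATCH\n'
    return 1
}
```

The real script then counts mismatches as errors and missing containers as warnings before printing the summary seen in the log.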
2026-02-14 04:55:22.836811 | orchestrator | 2026-02-14 04:54:56 | INFO  | Starting group_vars file reorganization 2026-02-14 04:55:22.836922 | orchestrator | 2026-02-14 04:54:56 | INFO  | Moved 0 file(s) to their respective directories 2026-02-14 04:55:22.836938 | orchestrator | 2026-02-14 04:54:56 | INFO  | Group_vars file reorganization completed 2026-02-14 04:55:22.836972 | orchestrator | 2026-02-14 04:54:59 | INFO  | Starting variable preparation from inventory 2026-02-14 04:55:22.836985 | orchestrator | 2026-02-14 04:55:01 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-02-14 04:55:22.836996 | orchestrator | 2026-02-14 04:55:01 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-02-14 04:55:22.837007 | orchestrator | 2026-02-14 04:55:01 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-02-14 04:55:22.837018 | orchestrator | 2026-02-14 04:55:01 | INFO  | 3 file(s) written, 6 host(s) processed 2026-02-14 04:55:22.837028 | orchestrator | 2026-02-14 04:55:01 | INFO  | Variable preparation completed 2026-02-14 04:55:22.837039 | orchestrator | 2026-02-14 04:55:03 | INFO  | Starting inventory overwrite handling 2026-02-14 04:55:22.837050 | orchestrator | 2026-02-14 04:55:03 | INFO  | Handling group overwrites in 99-overwrite 2026-02-14 04:55:22.837060 | orchestrator | 2026-02-14 04:55:03 | INFO  | Removing group frr:children from 60-generic 2026-02-14 04:55:22.837071 | orchestrator | 2026-02-14 04:55:03 | INFO  | Removing group netbird:children from 50-infrastructure 2026-02-14 04:55:22.837082 | orchestrator | 2026-02-14 04:55:03 | INFO  | Removing group ceph-rgw from 50-ceph 2026-02-14 04:55:22.837093 | orchestrator | 2026-02-14 04:55:03 | INFO  | Removing group ceph-mds from 50-ceph 2026-02-14 04:55:22.837103 | orchestrator | 2026-02-14 04:55:03 | INFO  | Handling group overwrites in 20-roles 2026-02-14 04:55:22.837114 | orchestrator | 2026-02-14 04:55:03 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-02-14 04:55:22.837125 | orchestrator | 2026-02-14 04:55:03 | INFO  | Removed 5 group(s) in total 2026-02-14 04:55:22.837135 | orchestrator | 2026-02-14 04:55:03 | INFO  | Inventory overwrite handling completed 2026-02-14 04:55:22.837146 | orchestrator | 2026-02-14 04:55:04 | INFO  | Starting merge of inventory files 2026-02-14 04:55:22.837156 | orchestrator | 2026-02-14 04:55:04 | INFO  | Inventory files merged successfully 2026-02-14 04:55:22.837191 | orchestrator | 2026-02-14 04:55:09 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-02-14 04:55:22.837253 | orchestrator | 2026-02-14 04:55:21 | INFO  | Successfully wrote ClusterShell configuration 2026-02-14 04:55:23.204128 | orchestrator | + [[ '' == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-14 04:55:23.204292 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-14 04:55:23.204317 | orchestrator | + local max_attempts=60 2026-02-14 04:55:23.204339 | orchestrator | + local name=kolla-ansible 2026-02-14 04:55:23.204358 | orchestrator | + local attempt_num=1 2026-02-14 04:55:23.204496 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-14 04:55:23.248495 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-14 04:55:23.248575 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-14 04:55:23.248588 | orchestrator | + local max_attempts=60 2026-02-14 04:55:23.248600 | orchestrator | + local name=osism-ansible 2026-02-14 04:55:23.248611 | orchestrator | + local attempt_num=1 2026-02-14 04:55:23.249384 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-14 04:55:23.287160 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-14 04:55:23.287319 | orchestrator | + docker compose --project-directory /opt/manager ps 2026-02-14 04:55:23.493375 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-02-14 04:55:23.493465 | 
orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251208.0 "/entrypoint.sh osis…" ceph-ansible 3 minutes ago Up 2 minutes (healthy) 2026-02-14 04:55:23.493478 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251208.0 "/entrypoint.sh osis…" kolla-ansible 3 minutes ago Up 2 minutes (healthy) 2026-02-14 04:55:23.493488 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" api 3 minutes ago Up 3 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-02-14 04:55:23.493503 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 hours ago Up 2 minutes (healthy) 8000/tcp 2026-02-14 04:55:23.493513 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" beat 3 minutes ago Up 3 minutes (healthy) 2026-02-14 04:55:23.493523 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" flower 3 minutes ago Up 3 minutes (healthy) 2026-02-14 04:55:23.493533 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251208.0 "/sbin/tini -- /entr…" inventory_reconciler 3 minutes ago Up About a minute (healthy) 2026-02-14 04:55:23.493542 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" listener 3 minutes ago Restarting (0) 17 seconds ago 2026-02-14 04:55:23.493552 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 hours ago Up 3 minutes (healthy) 3306/tcp 2026-02-14 04:55:23.493561 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" openstack 3 minutes ago Up 3 minutes (healthy) 2026-02-14 04:55:23.493571 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 hours ago Up 3 
minutes (healthy) 6379/tcp 2026-02-14 04:55:23.493580 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251208.0 "/entrypoint.sh osis…" osism-ansible 3 minutes ago Up 2 minutes (healthy) 2026-02-14 04:55:23.493613 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251208.0 "docker-entrypoint.s…" frontend 3 minutes ago Up 3 minutes 192.168.16.5:3000->3000/tcp 2026-02-14 04:55:23.493623 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251208.0 "/entrypoint.sh osis…" osism-kubernetes 3 minutes ago Up 2 minutes (healthy) 2026-02-14 04:55:23.493633 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- sleep…" osismclient 3 minutes ago Up 3 minutes (healthy) 2026-02-14 04:55:23.497821 | orchestrator | + [[ '' == \t\r\u\e ]] 2026-02-14 04:55:23.497847 | orchestrator | + [[ '' == \f\a\l\s\e ]] 2026-02-14 04:55:23.497857 | orchestrator | + osism apply facts 2026-02-14 04:55:35.640770 | orchestrator | 2026-02-14 04:55:35 | INFO  | Task 88dfb912-248b-4492-8fc2-8725f2049341 (facts) was prepared for execution. 2026-02-14 04:55:35.640876 | orchestrator | 2026-02-14 04:55:35 | INFO  | It takes a moment until task 88dfb912-248b-4492-8fc2-8725f2049341 (facts) has been started and output is visible here. 
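The `wait_for_container_healthy` helper traced above (via `set -x`) polls `docker inspect` for the container's health status until it reports `healthy`. A readable reconstruction under stated assumptions: the probe command is made injectable so the loop can be exercised without Docker, and the retry interval is a guess, since the trace only shows the already-healthy fast path:

```shell
#!/bin/sh
# Reconstruction of wait_for_container_healthy from the set -x trace.
# HEALTH_PROBE and SLEEP_INTERVAL are test seams added here; the original
# presumably calls docker inspect directly and sleeps a fixed interval.
wait_for_container_healthy() {
    max_attempts="$1"
    name="$2"
    attempt_num=1
    probe="${HEALTH_PROBE:-docker inspect -f {{.State.Health.Status}}}"
    while :; do
        status=$($probe "$name" 2>/dev/null)
        if [ "$status" = "healthy" ]; then
            return 0
        fi
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "$name did not become healthy after $max_attempts attempts" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep "${SLEEP_INTERVAL:-5}"   # interval is an assumption, not in the trace
    done
}
```

In the log both `kolla-ansible` and `osism-ansible` are already `healthy` on the first probe, so the loop exits immediately and the script proceeds to `docker compose ps`.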
2026-02-14 04:55:58.895556 | orchestrator | 2026-02-14 04:55:58.895665 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-14 04:55:58.895682 | orchestrator | 2026-02-14 04:55:58.895692 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-14 04:55:58.895703 | orchestrator | Saturday 14 February 2026 04:55:42 +0000 (0:00:02.119) 0:00:02.119 ***** 2026-02-14 04:55:58.895713 | orchestrator | ok: [testbed-manager] 2026-02-14 04:55:58.895724 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:55:58.895734 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:55:58.895744 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:55:58.895753 | orchestrator | ok: [testbed-node-3] 2026-02-14 04:55:58.895763 | orchestrator | ok: [testbed-node-4] 2026-02-14 04:55:58.895773 | orchestrator | ok: [testbed-node-5] 2026-02-14 04:55:58.895782 | orchestrator | 2026-02-14 04:55:58.895792 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-14 04:55:58.895802 | orchestrator | Saturday 14 February 2026 04:55:45 +0000 (0:00:03.710) 0:00:05.830 ***** 2026-02-14 04:55:58.895812 | orchestrator | skipping: [testbed-manager] 2026-02-14 04:55:58.895823 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:55:58.895833 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:55:58.895842 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:55:58.895852 | orchestrator | skipping: [testbed-node-3] 2026-02-14 04:55:58.895861 | orchestrator | skipping: [testbed-node-4] 2026-02-14 04:55:58.895871 | orchestrator | skipping: [testbed-node-5] 2026-02-14 04:55:58.895880 | orchestrator | 2026-02-14 04:55:58.895890 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-14 04:55:58.895900 | orchestrator | 2026-02-14 04:55:58.895910 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-14 04:55:58.895919 | orchestrator | Saturday 14 February 2026 04:55:48 +0000 (0:00:02.521) 0:00:08.351 ***** 2026-02-14 04:55:58.895929 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:55:58.895958 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:55:58.895969 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:55:58.895979 | orchestrator | ok: [testbed-manager] 2026-02-14 04:55:58.895993 | orchestrator | ok: [testbed-node-3] 2026-02-14 04:55:58.896003 | orchestrator | ok: [testbed-node-5] 2026-02-14 04:55:58.896013 | orchestrator | ok: [testbed-node-4] 2026-02-14 04:55:58.896022 | orchestrator | 2026-02-14 04:55:58.896032 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-14 04:55:58.896041 | orchestrator | 2026-02-14 04:55:58.896051 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-14 04:55:58.896061 | orchestrator | Saturday 14 February 2026 04:55:55 +0000 (0:00:07.288) 0:00:15.640 ***** 2026-02-14 04:55:58.896070 | orchestrator | skipping: [testbed-manager] 2026-02-14 04:55:58.896102 | orchestrator | skipping: [testbed-node-0] 2026-02-14 04:55:58.896115 | orchestrator | skipping: [testbed-node-1] 2026-02-14 04:55:58.896127 | orchestrator | skipping: [testbed-node-2] 2026-02-14 04:55:58.896138 | orchestrator | skipping: [testbed-node-3] 2026-02-14 04:55:58.896149 | orchestrator | skipping: [testbed-node-4] 2026-02-14 04:55:58.896160 | orchestrator | skipping: [testbed-node-5] 2026-02-14 04:55:58.896171 | orchestrator | 2026-02-14 04:55:58.896212 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 04:55:58.896225 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-14 04:55:58.896236 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-14 04:55:58.896247 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-14 04:55:58.896258 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-14 04:55:58.896269 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-14 04:55:58.896280 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-14 04:55:58.896291 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-14 04:55:58.896302 | orchestrator | 2026-02-14 04:55:58.896312 | orchestrator | 2026-02-14 04:55:58.896323 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 04:55:58.896334 | orchestrator | Saturday 14 February 2026 04:55:58 +0000 (0:00:02.664) 0:00:18.304 ***** 2026-02-14 04:55:58.896345 | orchestrator | =============================================================================== 2026-02-14 04:55:58.896355 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.29s 2026-02-14 04:55:58.896366 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 3.71s 2026-02-14 04:55:58.896377 | orchestrator | Gather facts for all hosts ---------------------------------------------- 2.66s 2026-02-14 04:55:58.896388 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 2.52s 2026-02-14 04:55:59.224332 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0 2026-02-14 04:55:59.325365 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-14 04:55:59.325808 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible 2026-02-14 04:55:59.367999 | orchestrator | + OPENSTACK_VERSION=2025.1 2026-02-14 04:55:59.368101 | 
orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release/2025.1 2026-02-14 04:55:59.374721 | orchestrator | + set -e 2026-02-14 04:55:59.374817 | orchestrator | + NAMESPACE=kolla/release/2025.1 2026-02-14 04:55:59.374832 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release/2025.1#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-02-14 04:55:59.380981 | orchestrator | + sh -c /opt/configuration/scripts/upgrade-services.sh 2026-02-14 04:55:59.390558 | orchestrator | 2026-02-14 04:55:59.390626 | orchestrator | # UPGRADE SERVICES 2026-02-14 04:55:59.390646 | orchestrator | 2026-02-14 04:55:59.390664 | orchestrator | + set -e 2026-02-14 04:55:59.390684 | orchestrator | + echo 2026-02-14 04:55:59.390703 | orchestrator | + echo '# UPGRADE SERVICES' 2026-02-14 04:55:59.390721 | orchestrator | + echo 2026-02-14 04:55:59.390740 | orchestrator | + source /opt/manager-vars.sh 2026-02-14 04:55:59.390893 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-14 04:55:59.391810 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-14 04:55:59.391831 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-14 04:55:59.391844 | orchestrator | ++ CEPH_VERSION=reef 2026-02-14 04:55:59.391857 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-14 04:55:59.391872 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-14 04:55:59.391885 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-14 04:55:59.391925 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-14 04:55:59.391944 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-14 04:55:59.391962 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-14 04:55:59.391980 | orchestrator | ++ export ARA=false 2026-02-14 04:55:59.391997 | orchestrator | ++ ARA=false 2026-02-14 04:55:59.392013 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-14 04:55:59.392031 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-14 04:55:59.392051 | orchestrator | ++ export TEMPEST=false 
2026-02-14 04:55:59.392069 | orchestrator | ++ TEMPEST=false 2026-02-14 04:55:59.392086 | orchestrator | ++ export IS_ZUUL=true 2026-02-14 04:55:59.392104 | orchestrator | ++ IS_ZUUL=true 2026-02-14 04:55:59.392122 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.122 2026-02-14 04:55:59.392140 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.122 2026-02-14 04:55:59.392159 | orchestrator | ++ export EXTERNAL_API=false 2026-02-14 04:55:59.392177 | orchestrator | ++ EXTERNAL_API=false 2026-02-14 04:55:59.392254 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-14 04:55:59.392272 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-14 04:55:59.392290 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-14 04:55:59.392302 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-14 04:55:59.392312 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-14 04:55:59.392323 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-14 04:55:59.392333 | orchestrator | ++ export RABBITMQ3TO4=true 2026-02-14 04:55:59.392344 | orchestrator | ++ RABBITMQ3TO4=true 2026-02-14 04:55:59.392354 | orchestrator | + SKIP_OPENSTACK_UPGRADE=false 2026-02-14 04:55:59.392364 | orchestrator | + SKIP_CEPH_UPGRADE=false 2026-02-14 04:55:59.392376 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh 2026-02-14 04:55:59.401994 | orchestrator | + set -e 2026-02-14 04:55:59.402143 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-14 04:55:59.403113 | orchestrator | ++ export INTERACTIVE=false 2026-02-14 04:55:59.403160 | orchestrator | ++ INTERACTIVE=false 2026-02-14 04:55:59.403273 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-14 04:55:59.403298 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-14 04:55:59.403316 | orchestrator | + source /opt/manager-vars.sh 2026-02-14 04:55:59.403335 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-14 04:55:59.403354 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-14 04:55:59.403373 | orchestrator | ++ 
export CEPH_VERSION=reef
2026-02-14 04:55:59.403392 | orchestrator | ++ CEPH_VERSION=reef
2026-02-14 04:55:59.403412 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-14 04:55:59.403432 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-14 04:55:59.403475 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-14 04:55:59.403495 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-14 04:55:59.403514 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-14 04:55:59.403534 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-14 04:55:59.403554 | orchestrator | ++ export ARA=false
2026-02-14 04:55:59.403573 | orchestrator | ++ ARA=false
2026-02-14 04:55:59.403593 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-14 04:55:59.403613 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-14 04:55:59.403633 | orchestrator | ++ export TEMPEST=false
2026-02-14 04:55:59.403651 | orchestrator | ++ TEMPEST=false
2026-02-14 04:55:59.403669 | orchestrator | ++ export IS_ZUUL=true
2026-02-14 04:55:59.403689 | orchestrator | ++ IS_ZUUL=true
2026-02-14 04:55:59.403709 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.122
2026-02-14 04:55:59.403730 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.122
2026-02-14 04:55:59.403751 | orchestrator | ++ export EXTERNAL_API=false
2026-02-14 04:55:59.403771 | orchestrator | ++ EXTERNAL_API=false
2026-02-14 04:55:59.403792 | orchestrator |
2026-02-14 04:55:59.403812 | orchestrator | # PULL IMAGES
2026-02-14 04:55:59.403832 | orchestrator |
2026-02-14 04:55:59.403852 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-14 04:55:59.403871 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-14 04:55:59.403892 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-14 04:55:59.403912 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-14 04:55:59.403932 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-14 04:55:59.403952 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-14 04:55:59.403970 | orchestrator | ++ export RABBITMQ3TO4=true
2026-02-14 04:55:59.403988 | orchestrator | ++ RABBITMQ3TO4=true
2026-02-14 04:55:59.404008 | orchestrator | + echo
2026-02-14 04:55:59.404027 | orchestrator | + echo '# PULL IMAGES'
2026-02-14 04:55:59.404047 | orchestrator | + echo
2026-02-14 04:55:59.404415 | orchestrator | ++ semver 9.5.0 7.0.0
2026-02-14 04:55:59.477415 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-14 04:55:59.477524 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-02-14 04:56:01.509534 | orchestrator | 2026-02-14 04:56:01 | INFO  | Trying to run play pull-images in environment custom
2026-02-14 04:56:11.722538 | orchestrator | 2026-02-14 04:56:11 | INFO  | Task b2f5ac3d-598c-4cd2-8362-a32094782b5a (pull-images) was prepared for execution.
2026-02-14 04:56:11.725594 | orchestrator | 2026-02-14 04:56:11 | INFO  | Task b2f5ac3d-598c-4cd2-8362-a32094782b5a is running in background. No more output. Check ARA for logs.
2026-02-14 04:56:12.040140 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/500-kubernetes.sh
2026-02-14 04:56:12.052516 | orchestrator | + set -e
2026-02-14 04:56:12.052606 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-14 04:56:12.052622 | orchestrator | ++ export INTERACTIVE=false
2026-02-14 04:56:12.052634 | orchestrator | ++ INTERACTIVE=false
2026-02-14 04:56:12.052645 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-14 04:56:12.052656 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-14 04:56:12.052667 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-02-14 04:56:12.055445 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-02-14 04:56:12.062284 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1
2026-02-14 04:56:12.062353 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1
2026-02-14 04:56:12.063722 | orchestrator | ++ semver 10.0.0-rc.1 8.0.3
2026-02-14 04:56:12.127765 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-14 04:56:12.127851 | orchestrator | + osism apply frr
2026-02-14 04:56:24.296253 | orchestrator | 2026-02-14 04:56:24 | INFO  | Task db13114a-2a36-425e-a6c8-45c4b43317ad (frr) was prepared for execution.
2026-02-14 04:56:24.296364 | orchestrator | 2026-02-14 04:56:24 | INFO  | It takes a moment until task db13114a-2a36-425e-a6c8-45c4b43317ad (frr) has been started and output is visible here.
2026-02-14 04:56:56.959616 | orchestrator |
2026-02-14 04:56:56.959750 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-02-14 04:56:56.959769 | orchestrator |
2026-02-14 04:56:56.959781 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-02-14 04:56:56.959794 | orchestrator | Saturday 14 February 2026 04:56:32 +0000 (0:00:03.234) 0:00:03.234 *****
2026-02-14 04:56:56.959814 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-02-14 04:56:56.959828 | orchestrator |
2026-02-14 04:56:56.959839 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-02-14 04:56:56.959851 | orchestrator | Saturday 14 February 2026 04:56:34 +0000 (0:00:02.618) 0:00:05.853 *****
2026-02-14 04:56:56.959861 | orchestrator | ok: [testbed-manager]
2026-02-14 04:56:56.959874 | orchestrator |
2026-02-14 04:56:56.959885 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-02-14 04:56:56.959896 | orchestrator | Saturday 14 February 2026 04:56:37 +0000 (0:00:02.337) 0:00:08.191 *****
2026-02-14 04:56:56.959907 | orchestrator | ok: [testbed-manager]
2026-02-14 04:56:56.959917 | orchestrator |
2026-02-14 04:56:56.959928 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-02-14 04:56:56.959939 | orchestrator | Saturday 14 February 2026 04:56:40 +0000 (0:00:03.030) 0:00:11.221 *****
2026-02-14 04:56:56.959950 | orchestrator | ok: [testbed-manager]
2026-02-14 04:56:56.959961 | orchestrator |
2026-02-14 04:56:56.959973 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-02-14 04:56:56.959984 | orchestrator | Saturday 14 February 2026 04:56:41 +0000 (0:00:01.867) 0:00:13.089 *****
2026-02-14 04:56:56.959995 | orchestrator | ok: [testbed-manager]
2026-02-14 04:56:56.960005 | orchestrator |
2026-02-14 04:56:56.960016 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-02-14 04:56:56.960027 | orchestrator | Saturday 14 February 2026 04:56:43 +0000 (0:00:01.924) 0:00:15.013 *****
2026-02-14 04:56:56.960038 | orchestrator | ok: [testbed-manager]
2026-02-14 04:56:56.960049 | orchestrator |
2026-02-14 04:56:56.960060 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-02-14 04:56:56.960071 | orchestrator | Saturday 14 February 2026 04:56:46 +0000 (0:00:02.430) 0:00:17.444 *****
2026-02-14 04:56:56.960082 | orchestrator | skipping: [testbed-manager]
2026-02-14 04:56:56.960118 | orchestrator |
2026-02-14 04:56:56.960130 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-02-14 04:56:56.960171 | orchestrator | Saturday 14 February 2026 04:56:47 +0000 (0:00:01.124) 0:00:18.568 *****
2026-02-14 04:56:56.960194 | orchestrator | skipping: [testbed-manager]
2026-02-14 04:56:56.960215 | orchestrator |
2026-02-14 04:56:56.960235 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-02-14 04:56:56.960253 | orchestrator | Saturday 14 February 2026 04:56:48 +0000 (0:00:01.132) 0:00:19.701 *****
2026-02-14 04:56:56.960266 | orchestrator | ok: [testbed-manager]
2026-02-14 04:56:56.960278 | orchestrator |
2026-02-14 04:56:56.960291 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-02-14 04:56:56.960303 | orchestrator | Saturday 14 February 2026 04:56:50 +0000 (0:00:01.965) 0:00:21.667 *****
2026-02-14 04:56:56.960315 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-02-14 04:56:56.960327 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-02-14 04:56:56.960342 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-02-14 04:56:56.960354 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-02-14 04:56:56.960366 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-02-14 04:56:56.960379 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-02-14 04:56:56.960392 | orchestrator |
2026-02-14 04:56:56.960423 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-02-14 04:56:56.960437 | orchestrator | Saturday 14 February 2026 04:56:54 +0000 (0:00:03.570) 0:00:25.238 *****
2026-02-14 04:56:56.960450 | orchestrator | ok: [testbed-manager]
2026-02-14 04:56:56.960462 | orchestrator |
2026-02-14 04:56:56.960475 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 04:56:56.960488 | orchestrator | testbed-manager : ok=9  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-14 04:56:56.960501 | orchestrator |
2026-02-14 04:56:56.960512 | orchestrator |
2026-02-14 04:56:56.960523 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 04:56:56.960533 | orchestrator | Saturday 14 February 2026 04:56:56 +0000 (0:00:02.530) 0:00:27.768 *****
2026-02-14 04:56:56.960544 | orchestrator | ===============================================================================
2026-02-14 04:56:56.960555 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.57s
2026-02-14 04:56:56.960565 | orchestrator | osism.services.frr : Install frr package -------------------------------- 3.03s
2026-02-14 04:56:56.960576 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 2.62s
2026-02-14 04:56:56.960586 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 2.53s
2026-02-14 04:56:56.960597 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 2.43s
2026-02-14 04:56:56.960607 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 2.34s
2026-02-14 04:56:56.960618 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.97s
2026-02-14 04:56:56.960629 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.92s
2026-02-14 04:56:56.960658 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.87s
2026-02-14 04:56:56.960670 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 1.13s
2026-02-14 04:56:56.960681 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 1.12s
2026-02-14 04:56:57.304828 | orchestrator | + osism apply kubernetes
2026-02-14 04:56:59.492311 | orchestrator | 2026-02-14 04:56:59 | INFO  | Task 47d00d3b-1bde-49f9-b6b9-a78ba2a50c5b (kubernetes) was prepared for execution.
2026-02-14 04:56:59.492430 | orchestrator | 2026-02-14 04:56:59 | INFO  | It takes a moment until task 47d00d3b-1bde-49f9-b6b9-a78ba2a50c5b (kubernetes) has been started and output is visible here.
2026-02-14 04:57:43.843853 | orchestrator |
2026-02-14 04:57:43.843978 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-02-14 04:57:43.843994 | orchestrator |
2026-02-14 04:57:43.844007 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-02-14 04:57:43.844020 | orchestrator | Saturday 14 February 2026 04:57:06 +0000 (0:00:02.173) 0:00:02.173 *****
2026-02-14 04:57:43.844031 | orchestrator | ok: [testbed-node-3]
2026-02-14 04:57:43.844043 | orchestrator | ok: [testbed-node-4]
2026-02-14 04:57:43.844054 | orchestrator | ok: [testbed-node-5]
2026-02-14 04:57:43.844065 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:57:43.844076 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:57:43.844086 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:57:43.844097 | orchestrator |
2026-02-14 04:57:43.844109 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-02-14 04:57:43.844150 | orchestrator | Saturday 14 February 2026 04:57:10 +0000 (0:00:04.424) 0:00:06.599 *****
2026-02-14 04:57:43.844162 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:57:43.844174 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:57:43.844185 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:57:43.844196 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:57:43.844206 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:57:43.844217 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:57:43.844228 | orchestrator |
2026-02-14 04:57:43.844239 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-02-14 04:57:43.844250 | orchestrator | Saturday 14 February 2026 04:57:12 +0000 (0:00:02.126) 0:00:08.725 *****
2026-02-14 04:57:43.844261 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:57:43.844272 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:57:43.844282 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:57:43.844293 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:57:43.844303 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:57:43.844314 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:57:43.844325 | orchestrator |
2026-02-14 04:57:43.844336 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-02-14 04:57:43.844347 | orchestrator | Saturday 14 February 2026 04:57:14 +0000 (0:00:02.060) 0:00:10.786 *****
2026-02-14 04:57:43.844358 | orchestrator | ok: [testbed-node-3]
2026-02-14 04:57:43.844369 | orchestrator | ok: [testbed-node-4]
2026-02-14 04:57:43.844382 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:57:43.844394 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:57:43.844406 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:57:43.844419 | orchestrator | ok: [testbed-node-5]
2026-02-14 04:57:43.844432 | orchestrator |
2026-02-14 04:57:43.844444 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-02-14 04:57:43.844456 | orchestrator | Saturday 14 February 2026 04:57:17 +0000 (0:00:03.011) 0:00:13.797 *****
2026-02-14 04:57:43.844468 | orchestrator | ok: [testbed-node-3]
2026-02-14 04:57:43.844480 | orchestrator | ok: [testbed-node-4]
2026-02-14 04:57:43.844493 | orchestrator | ok: [testbed-node-5]
2026-02-14 04:57:43.844505 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:57:43.844517 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:57:43.844528 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:57:43.844540 | orchestrator |
2026-02-14 04:57:43.844553 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-02-14 04:57:43.844565 | orchestrator | Saturday 14 February 2026 04:57:20 +0000 (0:00:03.077) 0:00:16.874 *****
2026-02-14 04:57:43.844577 | orchestrator | ok: [testbed-node-3]
2026-02-14 04:57:43.844589 | orchestrator | ok: [testbed-node-4]
2026-02-14 04:57:43.844602 | orchestrator | ok: [testbed-node-5]
2026-02-14 04:57:43.844614 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:57:43.844626 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:57:43.844674 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:57:43.844686 | orchestrator |
2026-02-14 04:57:43.844704 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-02-14 04:57:43.844723 | orchestrator | Saturday 14 February 2026 04:57:22 +0000 (0:00:02.060) 0:00:18.935 *****
2026-02-14 04:57:43.844741 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:57:43.844758 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:57:43.844776 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:57:43.844810 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:57:43.844843 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:57:43.844861 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:57:43.844879 | orchestrator |
2026-02-14 04:57:43.844898 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-02-14 04:57:43.844916 | orchestrator | Saturday 14 February 2026 04:57:25 +0000 (0:00:02.107) 0:00:21.042 *****
2026-02-14 04:57:43.844934 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:57:43.844945 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:57:43.844956 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:57:43.844966 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:57:43.844977 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:57:43.844987 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:57:43.844998 | orchestrator |
2026-02-14 04:57:43.845008 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-02-14 04:57:43.845019 | orchestrator | Saturday 14 February 2026 04:57:26 +0000 (0:00:01.793) 0:00:22.836 *****
2026-02-14 04:57:43.845029 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-14 04:57:43.845040 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-14 04:57:43.845050 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:57:43.845061 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-14 04:57:43.845082 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-14 04:57:43.845093 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:57:43.845104 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-14 04:57:43.845203 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-14 04:57:43.845229 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:57:43.845241 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-14 04:57:43.845251 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-14 04:57:43.845262 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:57:43.845293 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-14 04:57:43.845305 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-14 04:57:43.845315 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:57:43.845326 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-14 04:57:43.845337 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-14 04:57:43.845347 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:57:43.845358 | orchestrator |
2026-02-14 04:57:43.845368 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-02-14 04:57:43.845379 | orchestrator | Saturday 14 February 2026 04:57:28 +0000 (0:00:02.045) 0:00:24.881 *****
2026-02-14 04:57:43.845389 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:57:43.845399 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:57:43.845410 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:57:43.845420 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:57:43.845431 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:57:43.845441 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:57:43.845451 | orchestrator |
2026-02-14 04:57:43.845474 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-02-14 04:57:43.845487 | orchestrator | Saturday 14 February 2026 04:57:31 +0000 (0:00:02.322) 0:00:27.204 *****
2026-02-14 04:57:43.845497 | orchestrator | ok: [testbed-node-3]
2026-02-14 04:57:43.845508 | orchestrator | ok: [testbed-node-4]
2026-02-14 04:57:43.845518 | orchestrator | ok: [testbed-node-5]
2026-02-14 04:57:43.845529 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:57:43.845539 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:57:43.845550 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:57:43.845560 | orchestrator |
2026-02-14 04:57:43.845571 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-02-14 04:57:43.845582 | orchestrator | Saturday 14 February 2026 04:57:33 +0000 (0:00:01.937) 0:00:29.142 *****
2026-02-14 04:57:43.845592 | orchestrator | ok: [testbed-node-3]
2026-02-14 04:57:43.845603 | orchestrator | ok: [testbed-node-5]
2026-02-14 04:57:43.845613 | orchestrator | ok: [testbed-node-4]
2026-02-14 04:57:43.845623 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:57:43.845639 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:57:43.845650 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:57:43.845661 | orchestrator |
2026-02-14 04:57:43.845672 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-02-14 04:57:43.845682 | orchestrator | Saturday 14 February 2026 04:57:35 +0000 (0:00:02.590) 0:00:31.732 *****
2026-02-14 04:57:43.845693 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:57:43.845703 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:57:43.845714 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:57:43.845723 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:57:43.845733 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:57:43.845742 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:57:43.845751 | orchestrator |
2026-02-14 04:57:43.845761 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-02-14 04:57:43.845770 | orchestrator | Saturday 14 February 2026 04:57:37 +0000 (0:00:01.712) 0:00:33.445 *****
2026-02-14 04:57:43.845780 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:57:43.845789 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:57:43.845798 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:57:43.845807 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:57:43.845817 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:57:43.845826 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:57:43.845835 | orchestrator |
2026-02-14 04:57:43.845845 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-02-14 04:57:43.845856 | orchestrator | Saturday 14 February 2026 04:57:39 +0000 (0:00:02.202) 0:00:35.648 *****
2026-02-14 04:57:43.845865 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:57:43.845874 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:57:43.845885 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:57:43.845902 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:57:43.845918 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:57:43.845933 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:57:43.845950 | orchestrator |
2026-02-14 04:57:43.845970 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-02-14 04:57:43.845987 | orchestrator | Saturday 14 February 2026 04:57:41 +0000 (0:00:01.808) 0:00:37.456 *****
2026-02-14 04:57:43.846001 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-02-14 04:57:43.846085 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-02-14 04:57:43.846106 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:57:43.846143 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-02-14 04:57:43.846158 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-02-14 04:57:43.846230 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:57:43.846249 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-02-14 04:57:43.846265 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-02-14 04:57:43.846286 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:57:43.846296 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-02-14 04:57:43.846305 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-02-14 04:57:43.846314 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:57:43.846324 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-02-14 04:57:43.846333 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-02-14 04:57:43.846343 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:57:43.846352 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-02-14 04:57:43.846361 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-02-14 04:57:43.846370 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:57:43.846380 | orchestrator |
2026-02-14 04:57:43.846389 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-02-14 04:57:43.846399 | orchestrator | Saturday 14 February 2026 04:57:43 +0000 (0:00:01.972) 0:00:39.429 *****
2026-02-14 04:57:43.846408 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:57:43.846418 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:57:43.846441 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:59:34.860610 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:59:34.860729 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:59:34.860745 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:59:34.860757 | orchestrator |
2026-02-14 04:59:34.860770 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-02-14 04:59:34.860783 | orchestrator | Saturday 14 February 2026 04:57:45 +0000 (0:00:01.798) 0:00:41.227 *****
2026-02-14 04:59:34.860795 | orchestrator | skipping: [testbed-node-3]
2026-02-14 04:59:34.860805 | orchestrator | skipping: [testbed-node-4]
2026-02-14 04:59:34.860816 | orchestrator | skipping: [testbed-node-5]
2026-02-14 04:59:34.860827 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:59:34.860837 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:59:34.860848 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:59:34.860859 | orchestrator |
2026-02-14 04:59:34.860870 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-02-14 04:59:34.860881 | orchestrator |
2026-02-14 04:59:34.860892 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-02-14 04:59:34.860904 | orchestrator | Saturday 14 February 2026 04:57:47 +0000 (0:00:02.680) 0:00:43.908 *****
2026-02-14 04:59:34.860914 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:59:34.860926 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:59:34.860937 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:59:34.860948 | orchestrator |
2026-02-14 04:59:34.860959 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-02-14 04:59:34.860970 | orchestrator | Saturday 14 February 2026 04:57:49 +0000 (0:00:01.876) 0:00:45.785 *****
2026-02-14 04:59:34.860981 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:59:34.860992 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:59:34.861003 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:59:34.861013 | orchestrator |
2026-02-14 04:59:34.861024 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-02-14 04:59:34.861035 | orchestrator | Saturday 14 February 2026 04:57:51 +0000 (0:00:02.170) 0:00:47.956 *****
2026-02-14 04:59:34.861046 | orchestrator | changed: [testbed-node-0]
2026-02-14 04:59:34.861096 | orchestrator | changed: [testbed-node-1]
2026-02-14 04:59:34.861112 | orchestrator | changed: [testbed-node-2]
2026-02-14 04:59:34.861124 | orchestrator |
2026-02-14 04:59:34.861157 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-02-14 04:59:34.861171 | orchestrator | Saturday 14 February 2026 04:57:54 +0000 (0:00:02.339) 0:00:50.295 *****
2026-02-14 04:59:34.861183 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:59:34.861195 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:59:34.861208 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:59:34.861220 | orchestrator |
2026-02-14 04:59:34.861254 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-02-14 04:59:34.861268 | orchestrator | Saturday 14 February 2026 04:57:56 +0000 (0:00:02.243) 0:00:52.539 *****
2026-02-14 04:59:34.861281 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:59:34.861293 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:59:34.861306 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:59:34.861319 | orchestrator |
2026-02-14 04:59:34.861331 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-02-14 04:59:34.861345 | orchestrator | Saturday 14 February 2026 04:57:58 +0000 (0:00:01.572) 0:00:54.111 *****
2026-02-14 04:59:34.861358 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:59:34.861370 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:59:34.861383 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:59:34.861395 | orchestrator |
2026-02-14 04:59:34.861407 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-02-14 04:59:34.861418 | orchestrator | Saturday 14 February 2026 04:57:59 +0000 (0:00:01.666) 0:00:55.778 *****
2026-02-14 04:59:34.861430 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:59:34.861441 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:59:34.861451 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:59:34.861462 | orchestrator |
2026-02-14 04:59:34.861473 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-02-14 04:59:34.861484 | orchestrator | Saturday 14 February 2026 04:58:01 +0000 (0:00:02.161) 0:00:57.940 *****
2026-02-14 04:59:34.861495 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 04:59:34.861506 | orchestrator |
2026-02-14 04:59:34.861516 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-02-14 04:59:34.861527 | orchestrator | Saturday 14 February 2026 04:58:03 +0000 (0:00:01.959) 0:00:59.899 *****
2026-02-14 04:59:34.861538 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:59:34.861549 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:59:34.861560 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:59:34.861571 | orchestrator |
2026-02-14 04:59:34.861581 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-02-14 04:59:34.861592 | orchestrator | Saturday 14 February 2026 04:58:06 +0000 (0:00:02.467) 0:01:02.366 *****
2026-02-14 04:59:34.861603 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:59:34.861614 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:59:34.861624 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:59:34.861635 | orchestrator |
2026-02-14 04:59:34.861646 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-02-14 04:59:34.861656 | orchestrator | Saturday 14 February 2026 04:58:07 +0000 (0:00:01.630) 0:01:03.996 *****
2026-02-14 04:59:34.861667 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:59:34.861678 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:59:34.861689 | orchestrator | changed: [testbed-node-0]
2026-02-14 04:59:34.861700 | orchestrator |
2026-02-14 04:59:34.861710 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-02-14 04:59:34.861721 | orchestrator | Saturday 14 February 2026 04:58:09 +0000 (0:00:01.826) 0:01:05.823 *****
2026-02-14 04:59:34.861732 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:59:34.861743 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:59:34.861753 | orchestrator | changed: [testbed-node-0]
2026-02-14 04:59:34.861764 | orchestrator |
2026-02-14 04:59:34.861774 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-02-14 04:59:34.861785 | orchestrator | Saturday 14 February 2026 04:58:12 +0000 (0:00:02.460) 0:01:08.283 *****
2026-02-14 04:59:34.861796 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:59:34.861807 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:59:34.861835 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:59:34.861847 | orchestrator |
2026-02-14 04:59:34.861858 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-02-14 04:59:34.861869 | orchestrator | Saturday 14 February 2026 04:58:13 +0000 (0:00:01.409) 0:01:09.693 *****
2026-02-14 04:59:34.861887 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:59:34.861898 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:59:34.861909 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:59:34.861920 | orchestrator |
2026-02-14 04:59:34.861930 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-02-14 04:59:34.861941 | orchestrator | Saturday 14 February 2026 04:58:15 +0000 (0:00:01.687) 0:01:11.380 *****
2026-02-14 04:59:34.861952 | orchestrator | changed: [testbed-node-0]
2026-02-14 04:59:34.861968 | orchestrator | changed: [testbed-node-1]
2026-02-14 04:59:34.861986 | orchestrator | changed: [testbed-node-2]
2026-02-14 04:59:34.862005 | orchestrator |
2026-02-14 04:59:34.862110 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-02-14 04:59:34.862124 | orchestrator | Saturday 14 February 2026 04:58:17 +0000 (0:00:02.223) 0:01:13.604 *****
2026-02-14 04:59:34.862135 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:59:34.862145 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:59:34.862156 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:59:34.862167 | orchestrator |
2026-02-14 04:59:34.862178 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-02-14 04:59:34.862188 | orchestrator | Saturday 14 February 2026 04:58:19 +0000 (0:00:01.899) 0:01:15.503 *****
2026-02-14 04:59:34.862199 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:59:34.862210 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:59:34.862220 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:59:34.862231 | orchestrator |
2026-02-14 04:59:34.862241 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-02-14 04:59:34.862253 | orchestrator | Saturday 14 February 2026 04:58:20 +0000 (0:00:01.351) 0:01:16.854 *****
2026-02-14 04:59:34.862264 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-14 04:59:34.862277 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-14 04:59:34.862288 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-14 04:59:34.862299 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-14 04:59:34.862310 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-14 04:59:34.862320 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-14 04:59:34.862331 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:59:34.862342 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:59:34.862352 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:59:34.862363 | orchestrator |
2026-02-14 04:59:34.862374 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-02-14 04:59:34.862385 | orchestrator | Saturday 14 February 2026 04:58:44 +0000 (0:00:23.342) 0:01:40.197 *****
2026-02-14 04:59:34.862395 | orchestrator | skipping: [testbed-node-0]
2026-02-14 04:59:34.862406 | orchestrator | skipping: [testbed-node-1]
2026-02-14 04:59:34.862417 | orchestrator | skipping: [testbed-node-2]
2026-02-14 04:59:34.862428 | orchestrator |
2026-02-14 04:59:34.862438 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-02-14 04:59:34.862449 | orchestrator | Saturday 14 February 2026 04:58:45 +0000 (0:00:01.362) 0:01:41.559 *****
2026-02-14 04:59:34.862460 | orchestrator | changed: [testbed-node-0]
2026-02-14 04:59:34.862470 | orchestrator | changed: [testbed-node-1]
2026-02-14 04:59:34.862481 | orchestrator | changed: [testbed-node-2]
2026-02-14 04:59:34.862492 | orchestrator |
2026-02-14 04:59:34.862503 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-02-14 04:59:34.862523 | orchestrator | Saturday 14 February 2026 04:58:47 +0000 (0:00:02.243) 0:01:43.699 *****
2026-02-14 04:59:34.862534 | orchestrator | ok: [testbed-node-0]
2026-02-14 04:59:34.862544 | orchestrator | ok: [testbed-node-1]
2026-02-14 04:59:34.862555 | orchestrator | ok: [testbed-node-2]
2026-02-14 04:59:34.862571 | orchestrator |
2026-02-14 04:59:34.862591 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-02-14 04:59:34.862611 | orchestrator | Saturday 14 February 2026 04:58:49 +0000 (0:00:02.243) 0:01:45.943 *****
2026-02-14 04:59:34.862629 | orchestrator
| changed: [testbed-node-2] 2026-02-14 04:59:34.862646 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:59:34.862657 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:59:34.862668 | orchestrator | 2026-02-14 04:59:34.862679 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-02-14 04:59:34.862689 | orchestrator | Saturday 14 February 2026 04:59:29 +0000 (0:00:39.605) 0:02:25.549 ***** 2026-02-14 04:59:34.862700 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:59:34.862711 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:59:34.862721 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:59:34.862732 | orchestrator | 2026-02-14 04:59:34.862744 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-02-14 04:59:34.862763 | orchestrator | Saturday 14 February 2026 04:59:31 +0000 (0:00:01.745) 0:02:27.295 ***** 2026-02-14 04:59:34.862780 | orchestrator | ok: [testbed-node-0] 2026-02-14 04:59:34.862797 | orchestrator | ok: [testbed-node-1] 2026-02-14 04:59:34.862815 | orchestrator | ok: [testbed-node-2] 2026-02-14 04:59:34.862832 | orchestrator | 2026-02-14 04:59:34.862851 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-02-14 04:59:34.862869 | orchestrator | Saturday 14 February 2026 04:59:32 +0000 (0:00:01.650) 0:02:28.945 ***** 2026-02-14 04:59:34.862888 | orchestrator | changed: [testbed-node-0] 2026-02-14 04:59:34.862900 | orchestrator | changed: [testbed-node-1] 2026-02-14 04:59:34.862911 | orchestrator | changed: [testbed-node-2] 2026-02-14 04:59:34.862921 | orchestrator | 2026-02-14 04:59:34.862943 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-02-14 05:00:22.481945 | orchestrator | Saturday 14 February 2026 04:59:34 +0000 (0:00:01.902) 0:02:30.848 ***** 2026-02-14 05:00:22.482302 | orchestrator | ok: [testbed-node-1] 2026-02-14 
05:00:22.482326 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:00:22.482338 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:00:22.482350 | orchestrator | 2026-02-14 05:00:22.482405 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-02-14 05:00:22.482424 | orchestrator | Saturday 14 February 2026 04:59:36 +0000 (0:00:01.675) 0:02:32.524 ***** 2026-02-14 05:00:22.482445 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:00:22.482464 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:00:22.482482 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:00:22.482501 | orchestrator | 2026-02-14 05:00:22.482518 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-02-14 05:00:22.482537 | orchestrator | Saturday 14 February 2026 04:59:37 +0000 (0:00:01.328) 0:02:33.853 ***** 2026-02-14 05:00:22.482557 | orchestrator | changed: [testbed-node-0] 2026-02-14 05:00:22.482576 | orchestrator | changed: [testbed-node-1] 2026-02-14 05:00:22.482587 | orchestrator | changed: [testbed-node-2] 2026-02-14 05:00:22.482598 | orchestrator | 2026-02-14 05:00:22.482609 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-02-14 05:00:22.482620 | orchestrator | Saturday 14 February 2026 04:59:39 +0000 (0:00:01.720) 0:02:35.573 ***** 2026-02-14 05:00:22.482632 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:00:22.482643 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:00:22.482654 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:00:22.482665 | orchestrator | 2026-02-14 05:00:22.482676 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-02-14 05:00:22.482687 | orchestrator | Saturday 14 February 2026 04:59:41 +0000 (0:00:01.979) 0:02:37.552 ***** 2026-02-14 05:00:22.482698 | orchestrator | changed: [testbed-node-0] 2026-02-14 05:00:22.482732 | orchestrator | changed: 
[testbed-node-1] 2026-02-14 05:00:22.482744 | orchestrator | changed: [testbed-node-2] 2026-02-14 05:00:22.482754 | orchestrator | 2026-02-14 05:00:22.482765 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-02-14 05:00:22.482785 | orchestrator | Saturday 14 February 2026 04:59:43 +0000 (0:00:01.776) 0:02:39.329 ***** 2026-02-14 05:00:22.482796 | orchestrator | changed: [testbed-node-0] 2026-02-14 05:00:22.482807 | orchestrator | changed: [testbed-node-1] 2026-02-14 05:00:22.482817 | orchestrator | changed: [testbed-node-2] 2026-02-14 05:00:22.482828 | orchestrator | 2026-02-14 05:00:22.482839 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-02-14 05:00:22.482850 | orchestrator | Saturday 14 February 2026 04:59:45 +0000 (0:00:01.987) 0:02:41.316 ***** 2026-02-14 05:00:22.482861 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:00:22.482871 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:00:22.482882 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:00:22.482892 | orchestrator | 2026-02-14 05:00:22.482903 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-02-14 05:00:22.482914 | orchestrator | Saturday 14 February 2026 04:59:46 +0000 (0:00:01.350) 0:02:42.667 ***** 2026-02-14 05:00:22.482925 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:00:22.482935 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:00:22.482946 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:00:22.482957 | orchestrator | 2026-02-14 05:00:22.482968 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-02-14 05:00:22.482979 | orchestrator | Saturday 14 February 2026 04:59:48 +0000 (0:00:01.370) 0:02:44.037 ***** 2026-02-14 05:00:22.482989 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:00:22.483000 | orchestrator | ok: [testbed-node-0] 
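The "Configure kubectl cluster to https://192.168.16.8:6443" task (and the later "Change server address in the kubeconfig" steps) repoint the copied kubeconfig at the kube-vip endpoint instead of the node-local address. A dict-based sketch of that rewrite; the playbook itself likely shells out to `kubectl config` or edits the file in place, so this is only an illustration of the transformation:

```python
def set_kubeconfig_server(kubeconfig: dict, server: str) -> dict:
    """Point every cluster entry of a parsed kubeconfig at the given
    API endpoint (e.g. the VIP https://192.168.16.8:6443). Sketch of
    the 'Change server address in the kubeconfig' step; the real task
    operates on the YAML file rather than a dict."""
    for cluster in kubeconfig.get("clusters", []):
        cluster["cluster"]["server"] = server
    return kubeconfig
```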
2026-02-14 05:00:22.483011 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:00:22.483022 | orchestrator | 2026-02-14 05:00:22.483083 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-02-14 05:00:22.483105 | orchestrator | Saturday 14 February 2026 04:59:49 +0000 (0:00:01.666) 0:02:45.704 ***** 2026-02-14 05:00:22.483124 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:00:22.483168 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:00:22.483180 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:00:22.483191 | orchestrator | 2026-02-14 05:00:22.483223 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-02-14 05:00:22.483237 | orchestrator | Saturday 14 February 2026 04:59:51 +0000 (0:00:01.699) 0:02:47.403 ***** 2026-02-14 05:00:22.483248 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-14 05:00:22.483259 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-14 05:00:22.483270 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-14 05:00:22.483281 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-14 05:00:22.483292 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-14 05:00:22.483302 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-14 05:00:22.483314 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-14 05:00:22.483325 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-14 05:00:22.483336 | 
orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-02-14 05:00:22.483347 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-14 05:00:22.483367 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-14 05:00:22.483400 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-02-14 05:00:22.483446 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-14 05:00:22.483467 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-14 05:00:22.483489 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-14 05:00:22.483509 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-14 05:00:22.483524 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-14 05:00:22.483535 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-14 05:00:22.483546 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-14 05:00:22.483563 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-14 05:00:22.483582 | orchestrator | 2026-02-14 05:00:22.483601 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-02-14 05:00:22.483619 | orchestrator | 2026-02-14 05:00:22.483637 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-02-14 05:00:22.483656 | orchestrator | Saturday 14 February 2026 04:59:55 +0000 (0:00:04.352) 0:02:51.756 ***** 
2026-02-14 05:00:22.483672 | orchestrator | ok: [testbed-node-3] 2026-02-14 05:00:22.483692 | orchestrator | ok: [testbed-node-4] 2026-02-14 05:00:22.483709 | orchestrator | ok: [testbed-node-5] 2026-02-14 05:00:22.483726 | orchestrator | 2026-02-14 05:00:22.483744 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-02-14 05:00:22.483762 | orchestrator | Saturday 14 February 2026 04:59:57 +0000 (0:00:01.386) 0:02:53.143 ***** 2026-02-14 05:00:22.483782 | orchestrator | ok: [testbed-node-3] 2026-02-14 05:00:22.483799 | orchestrator | ok: [testbed-node-4] 2026-02-14 05:00:22.483818 | orchestrator | ok: [testbed-node-5] 2026-02-14 05:00:22.483837 | orchestrator | 2026-02-14 05:00:22.483857 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-02-14 05:00:22.483877 | orchestrator | Saturday 14 February 2026 04:59:58 +0000 (0:00:01.683) 0:02:54.826 ***** 2026-02-14 05:00:22.483896 | orchestrator | ok: [testbed-node-3] 2026-02-14 05:00:22.483914 | orchestrator | ok: [testbed-node-4] 2026-02-14 05:00:22.483928 | orchestrator | ok: [testbed-node-5] 2026-02-14 05:00:22.483939 | orchestrator | 2026-02-14 05:00:22.483950 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-02-14 05:00:22.483981 | orchestrator | Saturday 14 February 2026 05:00:00 +0000 (0:00:01.704) 0:02:56.531 ***** 2026-02-14 05:00:22.484002 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-14 05:00:22.484021 | orchestrator | 2026-02-14 05:00:22.484065 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-02-14 05:00:22.484083 | orchestrator | Saturday 14 February 2026 05:00:02 +0000 (0:00:01.664) 0:02:58.195 ***** 2026-02-14 05:00:22.484102 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:00:22.484121 | orchestrator | 
skipping: [testbed-node-4] 2026-02-14 05:00:22.484141 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:00:22.484160 | orchestrator | 2026-02-14 05:00:22.484176 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-02-14 05:00:22.484187 | orchestrator | Saturday 14 February 2026 05:00:03 +0000 (0:00:01.392) 0:02:59.588 ***** 2026-02-14 05:00:22.484198 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:00:22.484209 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:00:22.484219 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:00:22.484230 | orchestrator | 2026-02-14 05:00:22.484241 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-02-14 05:00:22.484252 | orchestrator | Saturday 14 February 2026 05:00:05 +0000 (0:00:01.438) 0:03:01.027 ***** 2026-02-14 05:00:22.484275 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:00:22.484285 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:00:22.484296 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:00:22.484307 | orchestrator | 2026-02-14 05:00:22.484317 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-02-14 05:00:22.484328 | orchestrator | Saturday 14 February 2026 05:00:06 +0000 (0:00:01.347) 0:03:02.374 ***** 2026-02-14 05:00:22.484339 | orchestrator | ok: [testbed-node-3] 2026-02-14 05:00:22.484350 | orchestrator | ok: [testbed-node-4] 2026-02-14 05:00:22.484361 | orchestrator | ok: [testbed-node-5] 2026-02-14 05:00:22.484371 | orchestrator | 2026-02-14 05:00:22.484382 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-02-14 05:00:22.484393 | orchestrator | Saturday 14 February 2026 05:00:08 +0000 (0:00:01.650) 0:03:04.025 ***** 2026-02-14 05:00:22.484403 | orchestrator | ok: [testbed-node-3] 2026-02-14 05:00:22.484414 | orchestrator | ok: [testbed-node-4] 
2026-02-14 05:00:22.484440 | orchestrator | ok: [testbed-node-5] 2026-02-14 05:00:22.484451 | orchestrator | 2026-02-14 05:00:22.484462 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-02-14 05:00:22.484473 | orchestrator | Saturday 14 February 2026 05:00:10 +0000 (0:00:02.254) 0:03:06.279 ***** 2026-02-14 05:00:22.484484 | orchestrator | ok: [testbed-node-3] 2026-02-14 05:00:22.484503 | orchestrator | ok: [testbed-node-4] 2026-02-14 05:00:22.484523 | orchestrator | ok: [testbed-node-5] 2026-02-14 05:00:22.484542 | orchestrator | 2026-02-14 05:00:22.484562 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-02-14 05:00:22.484581 | orchestrator | Saturday 14 February 2026 05:00:12 +0000 (0:00:02.216) 0:03:08.496 ***** 2026-02-14 05:00:22.484612 | orchestrator | changed: [testbed-node-3] 2026-02-14 05:00:22.484635 | orchestrator | changed: [testbed-node-4] 2026-02-14 05:00:22.484656 | orchestrator | changed: [testbed-node-5] 2026-02-14 05:00:22.484677 | orchestrator | 2026-02-14 05:00:22.484697 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-02-14 05:00:22.484718 | orchestrator | 2026-02-14 05:00:22.484738 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-02-14 05:00:22.484759 | orchestrator | Saturday 14 February 2026 05:00:20 +0000 (0:00:07.849) 0:03:16.345 ***** 2026-02-14 05:00:22.484779 | orchestrator | ok: [testbed-manager] 2026-02-14 05:00:22.484797 | orchestrator | 2026-02-14 05:00:22.484816 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-02-14 05:00:22.484850 | orchestrator | Saturday 14 February 2026 05:00:22 +0000 (0:00:02.133) 0:03:18.478 ***** 2026-02-14 05:01:33.117585 | orchestrator | ok: [testbed-manager] 2026-02-14 05:01:33.117733 | orchestrator | 2026-02-14 05:01:33.117764 | 
orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-02-14 05:01:33.117784 | orchestrator | Saturday 14 February 2026 05:00:23 +0000 (0:00:01.427) 0:03:19.906 ***** 2026-02-14 05:01:33.117802 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-14 05:01:33.117820 | orchestrator | 2026-02-14 05:01:33.117838 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-02-14 05:01:33.117854 | orchestrator | Saturday 14 February 2026 05:00:25 +0000 (0:00:01.658) 0:03:21.564 ***** 2026-02-14 05:01:33.117873 | orchestrator | changed: [testbed-manager] 2026-02-14 05:01:33.117891 | orchestrator | 2026-02-14 05:01:33.117908 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-02-14 05:01:33.117925 | orchestrator | Saturday 14 February 2026 05:00:27 +0000 (0:00:01.949) 0:03:23.514 ***** 2026-02-14 05:01:33.117942 | orchestrator | changed: [testbed-manager] 2026-02-14 05:01:33.117961 | orchestrator | 2026-02-14 05:01:33.118157 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-02-14 05:01:33.118184 | orchestrator | Saturday 14 February 2026 05:00:29 +0000 (0:00:01.895) 0:03:25.410 ***** 2026-02-14 05:01:33.118197 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-14 05:01:33.118239 | orchestrator | 2026-02-14 05:01:33.118252 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-02-14 05:01:33.118265 | orchestrator | Saturday 14 February 2026 05:00:32 +0000 (0:00:02.967) 0:03:28.377 ***** 2026-02-14 05:01:33.118278 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-14 05:01:33.118290 | orchestrator | 2026-02-14 05:01:33.118303 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-02-14 05:01:33.118316 | orchestrator | Saturday 14 February 
2026 05:00:34 +0000 (0:00:01.848) 0:03:30.225 ***** 2026-02-14 05:01:33.118343 | orchestrator | ok: [testbed-manager] 2026-02-14 05:01:33.118356 | orchestrator | 2026-02-14 05:01:33.118369 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-02-14 05:01:33.118382 | orchestrator | Saturday 14 February 2026 05:00:35 +0000 (0:00:01.459) 0:03:31.685 ***** 2026-02-14 05:01:33.118395 | orchestrator | ok: [testbed-manager] 2026-02-14 05:01:33.118408 | orchestrator | 2026-02-14 05:01:33.118420 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-02-14 05:01:33.118434 | orchestrator | 2026-02-14 05:01:33.118447 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-02-14 05:01:33.118460 | orchestrator | Saturday 14 February 2026 05:00:37 +0000 (0:00:01.568) 0:03:33.254 ***** 2026-02-14 05:01:33.118472 | orchestrator | ok: [testbed-manager] 2026-02-14 05:01:33.118482 | orchestrator | 2026-02-14 05:01:33.118493 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-02-14 05:01:33.118504 | orchestrator | Saturday 14 February 2026 05:00:38 +0000 (0:00:01.165) 0:03:34.420 ***** 2026-02-14 05:01:33.118515 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-02-14 05:01:33.118527 | orchestrator | 2026-02-14 05:01:33.118537 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-02-14 05:01:33.118548 | orchestrator | Saturday 14 February 2026 05:00:39 +0000 (0:00:01.501) 0:03:35.921 ***** 2026-02-14 05:01:33.118559 | orchestrator | ok: [testbed-manager] 2026-02-14 05:01:33.118570 | orchestrator | 2026-02-14 05:01:33.118580 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2026-02-14 05:01:33.118591 | orchestrator | Saturday 14 February 2026 
05:00:41 +0000 (0:00:01.899) 0:03:37.820 ***** 2026-02-14 05:01:33.118601 | orchestrator | ok: [testbed-manager] 2026-02-14 05:01:33.118612 | orchestrator | 2026-02-14 05:01:33.118623 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-02-14 05:01:33.118634 | orchestrator | Saturday 14 February 2026 05:00:44 +0000 (0:00:02.653) 0:03:40.474 ***** 2026-02-14 05:01:33.118645 | orchestrator | ok: [testbed-manager] 2026-02-14 05:01:33.118655 | orchestrator | 2026-02-14 05:01:33.118674 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-02-14 05:01:33.118693 | orchestrator | Saturday 14 February 2026 05:00:45 +0000 (0:00:01.436) 0:03:41.910 ***** 2026-02-14 05:01:33.118710 | orchestrator | ok: [testbed-manager] 2026-02-14 05:01:33.118729 | orchestrator | 2026-02-14 05:01:33.118746 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-02-14 05:01:33.118762 | orchestrator | Saturday 14 February 2026 05:00:47 +0000 (0:00:01.467) 0:03:43.378 ***** 2026-02-14 05:01:33.118781 | orchestrator | ok: [testbed-manager] 2026-02-14 05:01:33.118792 | orchestrator | 2026-02-14 05:01:33.118803 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-02-14 05:01:33.118814 | orchestrator | Saturday 14 February 2026 05:00:48 +0000 (0:00:01.626) 0:03:45.004 ***** 2026-02-14 05:01:33.118824 | orchestrator | ok: [testbed-manager] 2026-02-14 05:01:33.118836 | orchestrator | 2026-02-14 05:01:33.118846 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-02-14 05:01:33.118857 | orchestrator | Saturday 14 February 2026 05:00:51 +0000 (0:00:02.468) 0:03:47.473 ***** 2026-02-14 05:01:33.118868 | orchestrator | ok: [testbed-manager] 2026-02-14 05:01:33.118878 | orchestrator | 2026-02-14 05:01:33.118889 | orchestrator | PLAY [Run post actions on master 
nodes] **************************************** 2026-02-14 05:01:33.118908 | orchestrator | 2026-02-14 05:01:33.118919 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-02-14 05:01:33.118930 | orchestrator | Saturday 14 February 2026 05:00:53 +0000 (0:00:01.745) 0:03:49.218 ***** 2026-02-14 05:01:33.118941 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:01:33.118952 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:01:33.118963 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:01:33.118973 | orchestrator | 2026-02-14 05:01:33.119005 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-02-14 05:01:33.119017 | orchestrator | Saturday 14 February 2026 05:00:54 +0000 (0:00:01.419) 0:03:50.638 ***** 2026-02-14 05:01:33.119027 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:01:33.119039 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:01:33.119049 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:01:33.119060 | orchestrator | 2026-02-14 05:01:33.119094 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-02-14 05:01:33.119105 | orchestrator | Saturday 14 February 2026 05:00:56 +0000 (0:00:01.641) 0:03:52.279 ***** 2026-02-14 05:01:33.119116 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 05:01:33.119128 | orchestrator | 2026-02-14 05:01:33.119139 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-02-14 05:01:33.119150 | orchestrator | Saturday 14 February 2026 05:00:58 +0000 (0:00:01.794) 0:03:54.073 ***** 2026-02-14 05:01:33.119160 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-14 05:01:33.119171 | orchestrator | 2026-02-14 05:01:33.119182 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] 
********************* 2026-02-14 05:01:33.119193 | orchestrator | Saturday 14 February 2026 05:01:00 +0000 (0:00:01.947) 0:03:56.021 ***** 2026-02-14 05:01:33.119204 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-14 05:01:33.119215 | orchestrator | 2026-02-14 05:01:33.119225 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-02-14 05:01:33.119236 | orchestrator | Saturday 14 February 2026 05:01:01 +0000 (0:00:01.891) 0:03:57.913 ***** 2026-02-14 05:01:33.119247 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:01:33.119258 | orchestrator | 2026-02-14 05:01:33.119269 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-02-14 05:01:33.119279 | orchestrator | Saturday 14 February 2026 05:01:03 +0000 (0:00:01.182) 0:03:59.096 ***** 2026-02-14 05:01:33.119290 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-14 05:01:33.119301 | orchestrator | 2026-02-14 05:01:33.119312 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-02-14 05:01:33.119322 | orchestrator | Saturday 14 February 2026 05:01:05 +0000 (0:00:02.275) 0:04:01.371 ***** 2026-02-14 05:01:33.119333 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-14 05:01:33.119344 | orchestrator | 2026-02-14 05:01:33.119355 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-02-14 05:01:33.119366 | orchestrator | Saturday 14 February 2026 05:01:07 +0000 (0:00:02.203) 0:04:03.574 ***** 2026-02-14 05:01:33.119376 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-14 05:01:33.119387 | orchestrator | 2026-02-14 05:01:33.119398 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-02-14 05:01:33.119409 | orchestrator | Saturday 14 February 2026 05:01:08 +0000 (0:00:01.171) 0:04:04.746 ***** 2026-02-14 05:01:33.119420 | orchestrator | ok: 
[testbed-node-0 -> localhost] 2026-02-14 05:01:33.119430 | orchestrator | 2026-02-14 05:01:33.119441 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-02-14 05:01:33.119452 | orchestrator | Saturday 14 February 2026 05:01:09 +0000 (0:00:01.169) 0:04:05.915 ***** 2026-02-14 05:01:33.119463 | orchestrator | ok: [testbed-node-0 -> localhost] => { 2026-02-14 05:01:33.119474 | orchestrator |  "msg": "Installed Cilium version: 1.18.2, Target Cilium version: v1.18.2, Update needed: False\n" 2026-02-14 05:01:33.119486 | orchestrator | } 2026-02-14 05:01:33.119497 | orchestrator | 2026-02-14 05:01:33.119513 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-02-14 05:01:33.119524 | orchestrator | Saturday 14 February 2026 05:01:11 +0000 (0:00:01.151) 0:04:07.067 ***** 2026-02-14 05:01:33.119534 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:01:33.119545 | orchestrator | 2026-02-14 05:01:33.119556 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-02-14 05:01:33.119567 | orchestrator | Saturday 14 February 2026 05:01:12 +0000 (0:00:01.155) 0:04:08.223 ***** 2026-02-14 05:01:33.119578 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-02-14 05:01:33.119589 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-02-14 05:01:33.119600 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-02-14 05:01:33.119611 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-02-14 05:01:33.119621 | orchestrator | 2026-02-14 05:01:33.119632 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-02-14 05:01:33.119643 | orchestrator | Saturday 14 February 2026 05:01:17 +0000 (0:00:05.556) 0:04:13.779 ***** 2026-02-14 05:01:33.119654 | orchestrator 
| ok: [testbed-node-0 -> localhost]
2026-02-14 05:01:33.119664 | orchestrator |
2026-02-14 05:01:33.119675 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-02-14 05:01:33.119686 | orchestrator | Saturday 14 February 2026 05:01:20 +0000 (0:00:02.459) 0:04:16.238 *****
2026-02-14 05:01:33.119697 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-14 05:01:33.119708 | orchestrator |
2026-02-14 05:01:33.119718 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-02-14 05:01:33.119729 | orchestrator | Saturday 14 February 2026 05:01:22 +0000 (0:00:02.622) 0:04:18.861 *****
2026-02-14 05:01:33.119740 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-14 05:01:33.119751 | orchestrator |
2026-02-14 05:01:33.119762 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-02-14 05:01:33.119773 | orchestrator | Saturday 14 February 2026 05:01:27 +0000 (0:00:04.503) 0:04:23.364 *****
2026-02-14 05:01:33.119784 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:01:33.119794 | orchestrator |
2026-02-14 05:01:33.119805 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-02-14 05:01:33.119816 | orchestrator | Saturday 14 February 2026 05:01:28 +0000 (0:00:01.162) 0:04:24.527 *****
2026-02-14 05:01:33.119827 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-02-14 05:01:33.119838 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-02-14 05:01:33.119849 | orchestrator |
2026-02-14 05:01:33.119860 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-02-14 05:01:33.119879 | orchestrator | Saturday 14 February 2026 05:01:31 +0000 (0:00:03.205) 0:04:27.732 *****
2026-02-14 05:01:33.119891 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:01:33.119908 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:01:58.731310 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:01:58.731450 | orchestrator |
2026-02-14 05:01:58.731476 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-02-14 05:01:58.731496 | orchestrator | Saturday 14 February 2026 05:01:33 +0000 (0:00:01.380) 0:04:29.113 *****
2026-02-14 05:01:58.731514 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:01:58.731533 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:01:58.731552 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:01:58.731570 | orchestrator |
2026-02-14 05:01:58.731588 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-02-14 05:01:58.731606 | orchestrator |
2026-02-14 05:01:58.731624 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-02-14 05:01:58.731643 | orchestrator | Saturday 14 February 2026 05:01:35 +0000 (0:00:02.167) 0:04:31.280 *****
2026-02-14 05:01:58.731661 | orchestrator | ok: [testbed-manager]
2026-02-14 05:01:58.731712 | orchestrator |
2026-02-14 05:01:58.731732 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-02-14 05:01:58.731750 | orchestrator | Saturday 14 February 2026 05:01:36 +0000 (0:00:01.185) 0:04:32.466 *****
2026-02-14 05:01:58.731769 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-02-14 05:01:58.731789 | orchestrator |
2026-02-14 05:01:58.731808 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-02-14 05:01:58.731824 | orchestrator | Saturday 14 February 2026 05:01:37 +0000 (0:00:01.453) 0:04:33.920 *****
2026-02-14 05:01:58.731841 | orchestrator | ok: [testbed-manager]
2026-02-14 05:01:58.731858 | orchestrator |
2026-02-14 05:01:58.731876 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-02-14 05:01:58.731893 | orchestrator |
2026-02-14 05:01:58.731911 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-02-14 05:01:58.731951 | orchestrator | Saturday 14 February 2026 05:01:42 +0000 (0:00:04.637) 0:04:38.557 *****
2026-02-14 05:01:58.732005 | orchestrator | ok: [testbed-node-3]
2026-02-14 05:01:58.732026 | orchestrator | ok: [testbed-node-4]
2026-02-14 05:01:58.732043 | orchestrator | ok: [testbed-node-5]
2026-02-14 05:01:58.732062 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:01:58.732081 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:01:58.732099 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:01:58.732117 | orchestrator |
2026-02-14 05:01:58.732128 | orchestrator | TASK [Manage labels] ***********************************************************
2026-02-14 05:01:58.732139 | orchestrator | Saturday 14 February 2026 05:01:44 +0000 (0:00:02.008) 0:04:40.565 *****
2026-02-14 05:01:58.732150 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-14 05:01:58.732161 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-14 05:01:58.732171 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-14 05:01:58.732182 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-14 05:01:58.732193 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-14 05:01:58.732203 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-14 05:01:58.732214 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-14 05:01:58.732225 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-14 05:01:58.732236 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-14 05:01:58.732247 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-02-14 05:01:58.732258 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-02-14 05:01:58.732269 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-02-14 05:01:58.732279 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-02-14 05:01:58.732290 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-02-14 05:01:58.732301 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-02-14 05:01:58.732311 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-02-14 05:01:58.732322 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-02-14 05:01:58.732333 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-02-14 05:01:58.732344 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-02-14 05:01:58.732354 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-02-14 05:01:58.732377 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-02-14 05:01:58.732387 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-02-14 05:01:58.732398 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-02-14 05:01:58.732409 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-02-14 05:01:58.732420 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-02-14 05:01:58.732431 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-02-14 05:01:58.732464 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-02-14 05:01:58.732476 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-02-14 05:01:58.732486 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-02-14 05:01:58.732497 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-02-14 05:01:58.732508 | orchestrator |
2026-02-14 05:01:58.732519 | orchestrator | TASK [Manage annotations] ******************************************************
2026-02-14 05:01:58.732529 | orchestrator | Saturday 14 February 2026 05:01:54 +0000 (0:00:09.577) 0:04:50.143 *****
2026-02-14 05:01:58.732540 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:01:58.732551 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:01:58.732562 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:01:58.732573 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:01:58.732584 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:01:58.732595 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:01:58.732606 | orchestrator |
2026-02-14 05:01:58.732617 | orchestrator | TASK [Manage taints] ***********************************************************
2026-02-14 05:01:58.732628 | orchestrator | Saturday 14 February 2026 05:01:56 +0000 (0:00:01.959) 0:04:52.102 *****
2026-02-14 05:01:58.732639 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:01:58.732650 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:01:58.732661 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:01:58.732671 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:01:58.732682 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:01:58.732693 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:01:58.732704 | orchestrator |
2026-02-14 05:01:58.732715 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 05:01:58.732732 | orchestrator | testbed-manager : ok=21  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-14 05:01:58.732745 | orchestrator | testbed-node-0 : ok=53  changed=14  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-14 05:01:58.732756 | orchestrator | testbed-node-1 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-14 05:01:58.732767 | orchestrator | testbed-node-2 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-14 05:01:58.732778 | orchestrator | testbed-node-3 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-14 05:01:58.732789 | orchestrator | testbed-node-4 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-14 05:01:58.732799 | orchestrator | testbed-node-5 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-14 05:01:58.732810 | orchestrator |
2026-02-14 05:01:58.732821 | orchestrator |
2026-02-14 05:01:58.732832 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 05:01:58.732849 | orchestrator | Saturday 14 February 2026 05:01:58 +0000 (0:00:02.608) 0:04:54.711 *****
2026-02-14 05:01:58.732860 | orchestrator | ===============================================================================
2026-02-14 05:01:58.732871 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 39.61s
2026-02-14 05:01:58.732882 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 23.34s
2026-02-14 05:01:58.732894 | orchestrator | Manage labels ----------------------------------------------------------- 9.58s
2026-02-14 05:01:58.732905 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 7.85s
2026-02-14 05:01:58.732915 | orchestrator | k3s_server_post : Wait for Cilium resources ----------------------------- 5.56s
2026-02-14 05:01:58.732926 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 4.64s
2026-02-14 05:01:58.732937 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 4.50s
2026-02-14 05:01:58.732948 | orchestrator | k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites --- 4.43s
2026-02-14 05:01:58.732958 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 4.35s
2026-02-14 05:01:58.732992 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 3.21s
2026-02-14 05:01:58.733004 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 3.08s
2026-02-14 05:01:58.733015 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 3.01s
2026-02-14 05:01:58.733026 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.97s
2026-02-14 05:01:58.733037 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 2.68s
2026-02-14 05:01:58.733048 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 2.65s
2026-02-14 05:01:58.733058 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 2.62s
2026-02-14 05:01:58.733069 | orchestrator | Manage taints ----------------------------------------------------------- 2.61s
2026-02-14 05:01:58.733080 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 2.59s
2026-02-14 05:01:58.733099 | orchestrator | kubectl : Install required packages ------------------------------------- 2.47s
2026-02-14 05:01:59.198559 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.47s
2026-02-14 05:01:59.589306 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-02-14 05:01:59.589435 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/200-infrastructure.sh
2026-02-14 05:01:59.595579 | orchestrator | + set -e
2026-02-14 05:01:59.595688 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-14 05:01:59.595704 | orchestrator | ++ export INTERACTIVE=false
2026-02-14 05:01:59.595717 | orchestrator | ++ INTERACTIVE=false
2026-02-14 05:01:59.595728 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-14 05:01:59.595738 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-14 05:01:59.595749 | orchestrator | + osism apply openstackclient
2026-02-14 05:02:11.706412 | orchestrator | 2026-02-14 05:02:11 | INFO  | Task ab608033-4d30-479c-b7a7-8851e3bde953 (openstackclient) was prepared for execution.
2026-02-14 05:02:11.706554 | orchestrator | 2026-02-14 05:02:11 | INFO  | It takes a moment until task ab608033-4d30-479c-b7a7-8851e3bde953 (openstackclient) has been started and output is visible here.
2026-02-14 05:02:46.290716 | orchestrator |
2026-02-14 05:02:46.290837 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-02-14 05:02:46.290855 | orchestrator |
2026-02-14 05:02:46.290867 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-02-14 05:02:46.290879 | orchestrator | Saturday 14 February 2026 05:02:17 +0000 (0:00:01.740) 0:00:01.740 *****
2026-02-14 05:02:46.290891 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-02-14 05:02:46.290930 | orchestrator |
2026-02-14 05:02:46.290995 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-02-14 05:02:46.291008 | orchestrator | Saturday 14 February 2026 05:02:19 +0000 (0:00:01.838) 0:00:03.579 *****
2026-02-14 05:02:46.291019 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-02-14 05:02:46.291050 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient/data)
2026-02-14 05:02:46.291061 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-02-14 05:02:46.291072 | orchestrator |
2026-02-14 05:02:46.291083 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-02-14 05:02:46.291094 | orchestrator | Saturday 14 February 2026 05:02:21 +0000 (0:00:02.303) 0:00:05.882 *****
2026-02-14 05:02:46.291105 | orchestrator | changed: [testbed-manager]
2026-02-14 05:02:46.291116 | orchestrator |
2026-02-14 05:02:46.291126 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-02-14 05:02:46.291137 | orchestrator | Saturday 14 February 2026 05:02:24 +0000 (0:00:02.290) 0:00:08.172 *****
2026-02-14 05:02:46.291148 | orchestrator | ok: [testbed-manager]
2026-02-14 05:02:46.291160 | orchestrator |
2026-02-14 05:02:46.291170 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-02-14 05:02:46.291181 | orchestrator | Saturday 14 February 2026 05:02:26 +0000 (0:00:02.030) 0:00:10.203 *****
2026-02-14 05:02:46.291191 | orchestrator | ok: [testbed-manager]
2026-02-14 05:02:46.291202 | orchestrator |
2026-02-14 05:02:46.291213 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-02-14 05:02:46.291223 | orchestrator | Saturday 14 February 2026 05:02:28 +0000 (0:00:01.905) 0:00:12.109 *****
2026-02-14 05:02:46.291234 | orchestrator | ok: [testbed-manager]
2026-02-14 05:02:46.291244 | orchestrator |
2026-02-14 05:02:46.291255 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-02-14 05:02:46.291269 | orchestrator | Saturday 14 February 2026 05:02:29 +0000 (0:00:01.487) 0:00:13.596 *****
2026-02-14 05:02:46.291283 | orchestrator | changed: [testbed-manager]
2026-02-14 05:02:46.291296 | orchestrator |
2026-02-14 05:02:46.291308 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-02-14 05:02:46.291320 | orchestrator | Saturday 14 February 2026 05:02:40 +0000 (0:00:10.673) 0:00:24.270 *****
2026-02-14 05:02:46.291333 | orchestrator | changed: [testbed-manager]
2026-02-14 05:02:46.291347 | orchestrator |
2026-02-14 05:02:46.291359 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-02-14 05:02:46.291371 | orchestrator | Saturday 14 February 2026 05:02:42 +0000 (0:00:02.015) 0:00:26.285 *****
2026-02-14 05:02:46.291384 | orchestrator | changed: [testbed-manager]
2026-02-14 05:02:46.291396 | orchestrator |
2026-02-14 05:02:46.291409 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-02-14 05:02:46.291422 | orchestrator | Saturday 14 February 2026 05:02:44 +0000 (0:00:01.598) 0:00:27.884 *****
2026-02-14 05:02:46.291434 | orchestrator | ok: [testbed-manager]
2026-02-14 05:02:46.291446 | orchestrator |
2026-02-14 05:02:46.291458 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 05:02:46.291471 | orchestrator | testbed-manager : ok=10  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-14 05:02:46.291485 | orchestrator |
2026-02-14 05:02:46.291497 | orchestrator |
2026-02-14 05:02:46.291509 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 05:02:46.291521 | orchestrator | Saturday 14 February 2026 05:02:45 +0000 (0:00:01.956) 0:00:29.840 *****
2026-02-14 05:02:46.291534 | orchestrator | ===============================================================================
2026-02-14 05:02:46.291546 | orchestrator | osism.services.openstackclient : Restart openstackclient service ------- 10.67s
2026-02-14 05:02:46.291558 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.30s
2026-02-14 05:02:46.291571 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.29s
2026-02-14 05:02:46.291591 | orchestrator | osism.services.openstackclient : Manage openstackclient service --------- 2.03s
2026-02-14 05:02:46.291604 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 2.02s
2026-02-14 05:02:46.291617 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.96s
2026-02-14 05:02:46.291628 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.91s
2026-02-14 05:02:46.291638 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 1.84s
2026-02-14 05:02:46.291649 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.60s
2026-02-14 05:02:46.291660 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.49s
2026-02-14 05:02:46.613800 | orchestrator | + osism apply -a upgrade common
2026-02-14 05:02:48.660085 | orchestrator | 2026-02-14 05:02:48 | INFO  | Task 8ee2bfcc-f6bb-420a-a596-df39e5932592 (common) was prepared for execution.
2026-02-14 05:02:48.660182 | orchestrator | 2026-02-14 05:02:48 | INFO  | It takes a moment until task 8ee2bfcc-f6bb-420a-a596-df39e5932592 (common) has been started and output is visible here.
2026-02-14 05:03:05.239545 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-02-14 05:03:05.239650 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-02-14 05:03:05.239674 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-02-14 05:03:05.239683 | orchestrator | (): 'NoneType' object is not subscriptable
2026-02-14 05:03:05.239700 | orchestrator |
2026-02-14 05:03:05.239709 | orchestrator | PLAY [Apply role common] *******************************************************
2026-02-14 05:03:05.239717 | orchestrator |
2026-02-14 05:03:05.239742 | orchestrator | TASK [common : include_tasks] **************************************************
2026-02-14 05:03:05.239751 | orchestrator | Saturday 14 February 2026 05:02:55 +0000 (0:00:02.529) 0:00:02.529 *****
2026-02-14 05:03:05.239760 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-14 05:03:05.239769 | orchestrator |
2026-02-14 05:03:05.239777 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-02-14 05:03:05.239785 | orchestrator | Saturday 14 February 2026 05:02:57 +0000 (0:00:02.160) 0:00:04.690 *****
2026-02-14 05:03:05.239793 | orchestrator | ok: [testbed-node-0] =>
(item=[{'service_name': 'cron'}, 'cron'])
2026-02-14 05:03:05.239801 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-14 05:03:05.239808 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-14 05:03:05.239820 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-14 05:03:05.239835 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-14 05:03:05.239848 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-14 05:03:05.239857 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-14 05:03:05.239865 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-14 05:03:05.239873 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-14 05:03:05.239881 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-14 05:03:05.239888 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-14 05:03:05.239896 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-14 05:03:05.239904 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-14 05:03:05.239956 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-14 05:03:05.239966 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-14 05:03:05.239974 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-14 05:03:05.239982 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-14 05:03:05.239989 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 
'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-14 05:03:05.239997 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-14 05:03:05.240005 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-14 05:03:05.240013 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-14 05:03:05.240026 | orchestrator | 2026-02-14 05:03:05.240040 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-14 05:03:05.240054 | orchestrator | Saturday 14 February 2026 05:03:00 +0000 (0:00:03.070) 0:00:07.761 ***** 2026-02-14 05:03:05.240065 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-14 05:03:05.240076 | orchestrator | 2026-02-14 05:03:05.240085 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-14 05:03:05.240095 | orchestrator | Saturday 14 February 2026 05:03:02 +0000 (0:00:02.051) 0:00:09.812 ***** 2026-02-14 05:03:05.240109 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 05:03:05.240189 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 05:03:05.240201 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 05:03:05.240209 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 05:03:05.240218 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 05:03:05.240234 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 05:03:05.240242 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 05:03:05.240428 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:03:05.240452 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:03:06.781833 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:03:06.781990 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:03:06.782090 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:03:06.782120 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:03:06.782167 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:03:06.782190 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:03:06.782204 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:03:06.782236 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:03:06.782249 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:03:06.782260 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:03:06.782279 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:03:06.782291 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:03:06.782302 | orchestrator | 2026-02-14 05:03:06.782315 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-02-14 05:03:06.782331 | orchestrator | Saturday 14 February 2026 05:03:06 +0000 (0:00:03.353) 0:00:13.166 ***** 2026-02-14 05:03:06.782345 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 05:03:06.782358 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 05:03:06.782373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:06.782399 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:07.866176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:07.866319 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:07.866340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 05:03:07.866355 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:03:07.866369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:07.866382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 05:03:07.866436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:07.866457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:07.866477 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:03:07.866537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 
'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:07.866575 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 05:03:07.866594 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:03:07.866613 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:03:07.866632 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:07.866658 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 05:03:07.866679 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:07.866699 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:03:07.866718 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:07.866739 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 05:03:07.866759 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:07.866792 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:03:07.866828 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:10.361325 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:10.362121 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:03:10.362144 | orchestrator | 2026-02-14 05:03:10.362151 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal 
TLS key] ****** 2026-02-14 05:03:10.362159 | orchestrator | Saturday 14 February 2026 05:03:07 +0000 (0:00:01.742) 0:00:14.908 ***** 2026-02-14 05:03:10.362167 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 05:03:10.362188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 05:03:10.362195 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:10.362203 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:10.362224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:10.362231 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:10.362237 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:03:10.362262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 05:03:10.362269 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:03:10.362275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 05:03:10.362282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:10.362288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 
05:03:10.362295 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 05:03:10.362301 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:03:10.362307 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:10.362317 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:10.362323 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:03:10.362340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:17.994336 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 05:03:17.994460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:17.994480 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:03:17.994513 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:17.994526 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 05:03:17.994538 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:17.994573 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:03:17.994585 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:17.994597 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:17.994607 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:03:17.994618 | orchestrator | 2026-02-14 05:03:17.994630 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-02-14 05:03:17.994642 | orchestrator | Saturday 14 February 2026 05:03:10 +0000 (0:00:02.500) 0:00:17.409 ***** 2026-02-14 05:03:17.994653 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:03:17.994664 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:03:17.994674 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:03:17.994685 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:03:17.994713 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:03:17.994726 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:03:17.994737 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:03:17.994748 | orchestrator | 2026-02-14 05:03:17.994759 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-02-14 05:03:17.994770 | orchestrator | Saturday 14 February 2026 05:03:11 +0000 (0:00:01.004) 0:00:18.413 ***** 2026-02-14 05:03:17.994780 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:03:17.994791 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:03:17.994801 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:03:17.994812 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:03:17.994823 | 
orchestrator | skipping: [testbed-node-3] 2026-02-14 05:03:17.994835 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:03:17.994848 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:03:17.994861 | orchestrator | 2026-02-14 05:03:17.994874 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-02-14 05:03:17.994886 | orchestrator | Saturday 14 February 2026 05:03:12 +0000 (0:00:00.979) 0:00:19.393 ***** 2026-02-14 05:03:17.994898 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:03:17.994911 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:03:17.994952 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:03:17.994968 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:03:17.994980 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:03:17.994992 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:03:17.995004 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:03:17.995017 | orchestrator | 2026-02-14 05:03:17.995030 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-02-14 05:03:17.995048 | orchestrator | Saturday 14 February 2026 05:03:13 +0000 (0:00:00.787) 0:00:20.181 ***** 2026-02-14 05:03:17.995061 | orchestrator | changed: [testbed-manager] 2026-02-14 05:03:17.995081 | orchestrator | changed: [testbed-node-0] 2026-02-14 05:03:17.995094 | orchestrator | changed: [testbed-node-1] 2026-02-14 05:03:17.995108 | orchestrator | changed: [testbed-node-2] 2026-02-14 05:03:17.995126 | orchestrator | changed: [testbed-node-3] 2026-02-14 05:03:17.995145 | orchestrator | changed: [testbed-node-4] 2026-02-14 05:03:17.995163 | orchestrator | changed: [testbed-node-5] 2026-02-14 05:03:17.995206 | orchestrator | 2026-02-14 05:03:17.995223 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-02-14 05:03:17.995240 | orchestrator | Saturday 14 February 2026 05:03:15 +0000 
(0:00:01.928) 0:00:22.110 *****
2026-02-14 05:03:17.995259 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-14 05:03:17.995277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-14 05:03:17.995295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-14 05:03:17.995312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-14 05:03:17.995344 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-14 05:03:19.195484 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:19.195619 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-14 05:03:19.195635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:19.195646 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-14 05:03:19.195656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:19.195667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:19.195694 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:19.195706 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:19.195727 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:19.195738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:19.195748 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:19.195758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:19.195769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:19.195785 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:19.195796 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:19.195813 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:32.721708 | orchestrator
|
2026-02-14 05:03:32.721819 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-02-14 05:03:32.721837 | orchestrator | Saturday 14 February 2026 05:03:19 +0000 (0:00:04.130) 0:00:26.240 *****
2026-02-14 05:03:32.721849 | orchestrator | [WARNING]: Skipped
2026-02-14 05:03:32.721862 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-02-14 05:03:32.721873 | orchestrator | to this access issue:
2026-02-14 05:03:32.721884 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-02-14 05:03:32.721895 | orchestrator | directory
2026-02-14 05:03:32.721906 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-14 05:03:32.721980 | orchestrator |
2026-02-14 05:03:32.722000 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-02-14 05:03:32.722015 | orchestrator | Saturday 14 February 2026 05:03:20 +0000 (0:00:01.291) 0:00:27.532 *****
2026-02-14 05:03:32.722102 | orchestrator | [WARNING]: Skipped
2026-02-14 05:03:32.722114 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-02-14 05:03:32.722125 | orchestrator | to this access issue:
2026-02-14 05:03:32.722136 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-02-14 05:03:32.722147 | orchestrator | directory
2026-02-14 05:03:32.722158 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-14 05:03:32.722201 | orchestrator |
2026-02-14 05:03:32.722212 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-02-14 05:03:32.722223 | orchestrator | Saturday 14 February 2026 05:03:21 +0000 (0:00:00.921) 0:00:28.454 *****
2026-02-14 05:03:32.722234 | orchestrator | [WARNING]: Skipped
2026-02-14 05:03:32.722245 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-02-14 05:03:32.722258 | orchestrator | to this access issue:
2026-02-14 05:03:32.722271 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-02-14 05:03:32.722297 | orchestrator | directory
2026-02-14 05:03:32.722310 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-14 05:03:32.722322 | orchestrator |
2026-02-14 05:03:32.722335 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-02-14 05:03:32.722348 | orchestrator | Saturday 14 February 2026 05:03:22 +0000 (0:00:01.050) 0:00:29.504 *****
2026-02-14 05:03:32.722361 | orchestrator | [WARNING]: Skipped
2026-02-14 05:03:32.722373 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-02-14 05:03:32.722386 | orchestrator | to this access issue:
2026-02-14 05:03:32.722398 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-02-14 05:03:32.722411 | orchestrator | directory
2026-02-14 05:03:32.722424 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-14 05:03:32.722436 | orchestrator |
2026-02-14 05:03:32.722450 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-02-14 05:03:32.722463 | orchestrator | Saturday 14 February 2026 05:03:23 +0000 (0:00:00.924) 0:00:30.429 *****
2026-02-14 05:03:32.722475 | orchestrator | changed: [testbed-node-1]
2026-02-14 05:03:32.722487 | orchestrator | changed: [testbed-node-0]
2026-02-14 05:03:32.722499 | orchestrator | changed: [testbed-manager]
2026-02-14 05:03:32.722511 | orchestrator | changed: [testbed-node-2]
2026-02-14 05:03:32.722523 | orchestrator | changed: [testbed-node-3]
2026-02-14 05:03:32.722536 | orchestrator | changed: [testbed-node-4]
2026-02-14 05:03:32.722548 | orchestrator | changed: [testbed-node-5]
2026-02-14 05:03:32.722560 | orchestrator |
2026-02-14 05:03:32.722572 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-02-14 05:03:32.722585 | orchestrator | Saturday 14 February 2026 05:03:26 +0000 (0:00:03.118) 0:00:33.548 *****
2026-02-14 05:03:32.722616 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-14 05:03:32.722631 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-14 05:03:32.722644 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-14 05:03:32.722655 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-14 05:03:32.722666 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-14 05:03:32.722676 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-14 05:03:32.722687 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-14 05:03:32.722698 | orchestrator |
2026-02-14 05:03:32.722708 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-02-14 05:03:32.722719 | orchestrator | Saturday 14 February 2026 05:03:28 +0000 (0:00:02.318) 0:00:35.866 *****
2026-02-14 05:03:32.722730 | orchestrator | ok: [testbed-manager]
2026-02-14 05:03:32.722741 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:03:32.722751 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:03:32.722762 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:03:32.722773 | orchestrator | ok: [testbed-node-3]
2026-02-14 05:03:32.722783 | orchestrator | ok: [testbed-node-4]
2026-02-14 05:03:32.722794 | orchestrator | ok: [testbed-node-5]
2026-02-14 05:03:32.722804 |
orchestrator |
2026-02-14 05:03:32.722815 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2026-02-14 05:03:32.722826 | orchestrator | Saturday 14 February 2026 05:03:30 +0000 (0:00:01.967) 0:00:37.834 *****
2026-02-14 05:03:32.722858 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-14 05:03:32.722880 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:32.722894 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-14 05:03:32.722905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:32.722975 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:32.722991 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-14 05:03:32.723003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:32.723015 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:32.723034 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-14 05:03:39.778878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:39.779036 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-14 05:03:39.779074 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:39.779088 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:39.779102 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-14 05:03:39.779114 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:39.779125 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-14 05:03:39.779156 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:39.779168 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:39.779180 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:39.779199 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:39.779211 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes':
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:39.779222 | orchestrator |
2026-02-14 05:03:39.779235 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-02-14 05:03:39.779247 | orchestrator | Saturday 14 February 2026 05:03:32 +0000 (0:00:01.928) 0:00:39.763 *****
2026-02-14 05:03:39.779258 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-14 05:03:39.779270 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-14 05:03:39.779280 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-14 05:03:39.779291 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-14 05:03:39.779302 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-14 05:03:39.779312 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-14 05:03:39.779323 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-14 05:03:39.779334 | orchestrator |
2026-02-14 05:03:39.779345 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-02-14 05:03:39.779355 | orchestrator | Saturday 14 February 2026 05:03:34 +0000 (0:00:02.193) 0:00:41.957 *****
2026-02-14 05:03:39.779366 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-14 05:03:39.779377 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-14 05:03:39.779388 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-14 05:03:39.779401 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-14 05:03:39.779414 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-14 05:03:39.779428 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-14 05:03:39.779441 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-14 05:03:39.779453 | orchestrator |
2026-02-14 05:03:39.779466 | orchestrator | TASK [service-check-containers : common | Check containers] ********************
2026-02-14 05:03:39.779479 | orchestrator | Saturday 14 February 2026 05:03:37 +0000 (0:00:02.341) 0:00:44.298 *****
2026-02-14 05:03:39.779512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-14 05:03:40.492020 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-14 05:03:40.492114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-14 05:03:40.492128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-14 05:03:40.492139 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-14 05:03:40.492149 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-14 05:03:40.492159 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-14 05:03:40.492169 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:40.492239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:40.492252 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:40.492262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:40.492272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:40.492282 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:40.492292 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:40.492310 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:40.492398 | orchestrator |
changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:03:42.086337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:03:42.086439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:03:42.086456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:03:42.086472 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:42.086484 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:03:42.086496 | orchestrator |
2026-02-14 05:03:42.086509 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] ***
2026-02-14 05:03:42.086521 | orchestrator | Saturday 14 February 2026 05:03:40 +0000 (0:00:03.245) 0:00:47.544 *****
2026-02-14 05:03:42.086533 | orchestrator | changed: [testbed-manager] => {
2026-02-14 05:03:42.086544 | orchestrator |  "msg": "Notifying handlers"
2026-02-14 05:03:42.086555 | orchestrator | }
2026-02-14 05:03:42.086566 | orchestrator | changed: [testbed-node-0] => {
2026-02-14 05:03:42.086577 | orchestrator |  "msg": "Notifying handlers"
2026-02-14 05:03:42.086587 | orchestrator | }
2026-02-14 05:03:42.086598 | orchestrator | changed: [testbed-node-1] => {
2026-02-14 05:03:42.086608 | orchestrator |  "msg": "Notifying handlers"
2026-02-14 05:03:42.086619 | orchestrator | }
2026-02-14 05:03:42.086629 | orchestrator | changed: [testbed-node-2] => {
2026-02-14 05:03:42.086662 | orchestrator |  "msg": "Notifying handlers"
2026-02-14 05:03:42.086674 | orchestrator | }
2026-02-14 05:03:42.086684 | orchestrator | changed: [testbed-node-3] => {
2026-02-14 05:03:42.086695 | orchestrator |  "msg": "Notifying handlers"
2026-02-14
05:03:42.086705 | orchestrator | } 2026-02-14 05:03:42.086716 | orchestrator | changed: [testbed-node-4] => { 2026-02-14 05:03:42.086727 | orchestrator |  "msg": "Notifying handlers" 2026-02-14 05:03:42.086737 | orchestrator | } 2026-02-14 05:03:42.086748 | orchestrator | changed: [testbed-node-5] => { 2026-02-14 05:03:42.086758 | orchestrator |  "msg": "Notifying handlers" 2026-02-14 05:03:42.086769 | orchestrator | } 2026-02-14 05:03:42.086780 | orchestrator | 2026-02-14 05:03:42.086791 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-14 05:03:42.086801 | orchestrator | Saturday 14 February 2026 05:03:41 +0000 (0:00:01.011) 0:00:48.556 ***** 2026-02-14 05:03:42.086828 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 05:03:42.086861 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:42.086875 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:42.086887 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:03:42.086899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 05:03:42.086942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:42.086964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:42.086990 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:03:42.087002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 05:03:42.087014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:42.087025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:42.087036 | 
orchestrator | skipping: [testbed-node-1] 2026-02-14 05:03:42.087057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 05:03:46.535718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:46.535826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:46.535846 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:03:46.535861 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 05:03:46.535894 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:46.535969 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:46.535984 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-14 05:03:46.535996 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-02-14 05:03:46.536019 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:03:46.536035 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 05:03:46.536067 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:46.536080 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:46.536091 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:03:46.536102 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 05:03:46.536122 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:46.536134 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:03:46.536145 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:03:46.536156 | orchestrator | 2026-02-14 05:03:46.536168 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-14 05:03:46.536179 | orchestrator | Saturday 14 February 2026 05:03:43 +0000 (0:00:02.136) 0:00:50.692 ***** 2026-02-14 05:03:46.536190 | orchestrator | 2026-02-14 05:03:46.536202 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-14 05:03:46.536213 | orchestrator | Saturday 14 February 2026 05:03:43 +0000 (0:00:00.098) 0:00:50.791 ***** 2026-02-14 05:03:46.536224 | orchestrator | 2026-02-14 
05:03:46.536237 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-14 05:03:46.536250 | orchestrator | Saturday 14 February 2026 05:03:43 +0000 (0:00:00.073) 0:00:50.865 *****
2026-02-14 05:03:46.536263 | orchestrator |
2026-02-14 05:03:46.536275 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-14 05:03:46.536288 | orchestrator | Saturday 14 February 2026 05:03:43 +0000 (0:00:00.088) 0:00:50.953 *****
2026-02-14 05:03:46.536300 | orchestrator |
2026-02-14 05:03:46.536312 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-14 05:03:46.536325 | orchestrator | Saturday 14 February 2026 05:03:43 +0000 (0:00:00.076) 0:00:51.029 *****
2026-02-14 05:03:46.536337 | orchestrator |
2026-02-14 05:03:46.536350 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-14 05:03:46.536363 | orchestrator | Saturday 14 February 2026 05:03:44 +0000 (0:00:00.323) 0:00:51.353 *****
2026-02-14 05:03:46.536376 | orchestrator |
2026-02-14 05:03:46.536393 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-14 05:03:46.536406 | orchestrator | Saturday 14 February 2026 05:03:44 +0000 (0:00:00.074) 0:00:51.427 *****
2026-02-14 05:03:46.536419 | orchestrator |
2026-02-14 05:03:46.536430 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-02-14 05:03:46.536441 | orchestrator | Saturday 14 February 2026 05:03:44 +0000 (0:00:00.106) 0:00:51.534 *****
2026-02-14 05:03:46.536451 | orchestrator | [WARNING]: Failure using method (v2_runner_on_failed) in callback plugin
2026-02-14 05:03:46.536462 | orchestrator | (): '7fd2eba6-969c-f0ba-5dcc-00000000000f'
2026-02-14 05:03:46.536507 | orchestrator | fatal: [testbed-manager]: FAILED!
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_q9e5azny/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_q9e5azny/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_q9e5azny/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for 
http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-14 05:03:48.125615 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_h5pf9ldn/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_h5pf9ldn/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_h5pf9ldn/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-14 05:03:48.125760 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_f2oglzu8/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_f2oglzu8/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_f2oglzu8/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n 
self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-14 05:03:48.125788 | orchestrator | fatal: [testbed-node-3]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_89r0o6mk/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_89r0o6mk/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File 
\"/tmp/ansible_kolla_container_payload_89r0o6mk/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-14 05:03:48.125819 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_srldndmy/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_srldndmy/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_srldndmy/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for 
http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-14 05:03:48.622841 | orchestrator | 2026-02-14 05:03:48 | INFO  | Task 0e4c20d6-2319-44d7-a703-46346d01816a (common) was prepared for execution. 2026-02-14 05:03:48.622991 | orchestrator | 2026-02-14 05:03:48 | INFO  | It takes a moment until task 0e4c20d6-2319-44d7-a703-46346d01816a (common) has been started and output is visible here. 2026-02-14 05:03:58.426717 | orchestrator | fatal: [testbed-node-4]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_lsncc98w/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_lsncc98w/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_lsncc98w/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n 
^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-14 05:03:58.427008 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_noacmy6s/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_noacmy6s/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File 
\"/tmp/ansible_kolla_container_payload_noacmy6s/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-14 05:03:58.427042 | orchestrator | 2026-02-14 05:03:58.427057 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 05:03:58.427071 | orchestrator | testbed-manager : ok=18  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-14 05:03:58.427102 | orchestrator | testbed-node-0 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-14 05:03:58.427114 | orchestrator | testbed-node-1 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-14 05:03:58.427125 | orchestrator | testbed-node-2 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-14 05:03:58.427147 | orchestrator | testbed-node-3 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-14 05:03:58.427158 | orchestrator | testbed-node-4 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-14 
05:03:58.427169 | orchestrator | testbed-node-5 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-14 05:03:58.427180 | orchestrator | 2026-02-14 05:03:58.427191 | orchestrator | 2026-02-14 05:03:58.427202 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 05:03:58.427216 | orchestrator | Saturday 14 February 2026 05:03:48 +0000 (0:00:03.649) 0:00:55.184 ***** 2026-02-14 05:03:58.427230 | orchestrator | =============================================================================== 2026-02-14 05:03:58.427243 | orchestrator | common : Copying over config.json files for services -------------------- 4.13s 2026-02-14 05:03:58.427255 | orchestrator | common : Restart fluentd container -------------------------------------- 3.65s 2026-02-14 05:03:58.427268 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.35s 2026-02-14 05:03:58.427280 | orchestrator | service-check-containers : common | Check containers -------------------- 3.25s 2026-02-14 05:03:58.427293 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.12s 2026-02-14 05:03:58.427306 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.07s 2026-02-14 05:03:58.427318 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.50s 2026-02-14 05:03:58.427331 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.34s 2026-02-14 05:03:58.427344 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.32s 2026-02-14 05:03:58.427356 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.19s 2026-02-14 05:03:58.427369 | orchestrator | common : include_tasks -------------------------------------------------- 2.16s 2026-02-14 05:03:58.427381 | orchestrator | 
service-check-containers : Include tasks -------------------------------- 2.14s 2026-02-14 05:03:58.427394 | orchestrator | common : include_tasks -------------------------------------------------- 2.05s 2026-02-14 05:03:58.427407 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 1.97s 2026-02-14 05:03:58.427419 | orchestrator | common : Copying over kolla.target -------------------------------------- 1.93s 2026-02-14 05:03:58.427432 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.93s 2026-02-14 05:03:58.427445 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.74s 2026-02-14 05:03:58.427459 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.29s 2026-02-14 05:03:58.427472 | orchestrator | common : Find custom fluentd format config files ------------------------ 1.05s 2026-02-14 05:03:58.427485 | orchestrator | service-check-containers : common | Notify handlers to restart containers --- 1.01s 2026-02-14 05:03:58.427498 | orchestrator | 2026-02-14 05:03:58.427511 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-02-14 05:03:58.427525 | orchestrator | 2026-02-14 05:03:58.427537 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-14 05:03:58.427563 | orchestrator | Saturday 14 February 2026 05:03:55 +0000 (0:00:02.041) 0:00:02.041 ***** 2026-02-14 05:03:58.427583 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-14 05:03:58.427602 | orchestrator | 2026-02-14 05:03:58.427633 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-02-14 05:04:07.247286 | orchestrator | Saturday 14 February 2026 05:03:58 
+0000 (0:00:03.251) 0:00:05.293 ***** 2026-02-14 05:04:07.247397 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-14 05:04:07.247413 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-14 05:04:07.247425 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-14 05:04:07.247436 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-14 05:04:07.247447 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-14 05:04:07.247458 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-14 05:04:07.247469 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-14 05:04:07.247480 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-14 05:04:07.247491 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-14 05:04:07.247503 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-14 05:04:07.247515 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-14 05:04:07.247525 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-14 05:04:07.247536 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-14 05:04:07.247547 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-14 05:04:07.247558 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-14 05:04:07.247568 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-14 05:04:07.247579 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 
'kolla-toolbox']) 2026-02-14 05:04:07.247590 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-14 05:04:07.247601 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-14 05:04:07.247611 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-14 05:04:07.247622 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-14 05:04:07.247633 | orchestrator | 2026-02-14 05:04:07.247645 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-14 05:04:07.247656 | orchestrator | Saturday 14 February 2026 05:04:01 +0000 (0:00:03.208) 0:00:08.502 ***** 2026-02-14 05:04:07.247667 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-14 05:04:07.247680 | orchestrator | 2026-02-14 05:04:07.247691 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-14 05:04:07.247702 | orchestrator | Saturday 14 February 2026 05:04:04 +0000 (0:00:02.956) 0:00:11.459 ***** 2026-02-14 05:04:07.247716 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 05:04:07.247751 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 05:04:07.247763 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 05:04:07.247800 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 05:04:07.247816 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 05:04:07.247830 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 05:04:07.247844 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 05:04:07.247858 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-02-14 05:04:07.247879 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:04:07.247893 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:04:07.247951 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-02-14 05:04:09.712839 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:04:09.712967 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:04:09.712977 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 
05:04:09.712983 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:04:09.713003 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:04:09.713008 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:04:09.713012 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:04:09.713027 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:04:09.713043 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:04:09.713048 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:04:09.713053 | orchestrator | 2026-02-14 05:04:09.713058 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-02-14 05:04:09.713063 | orchestrator | Saturday 14 February 2026 05:04:08 +0000 (0:00:04.367) 0:00:15.826 ***** 2026-02-14 05:04:09.713069 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 05:04:09.713075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 05:04:09.713086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:04:09.713091 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:04:09.713098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:04:09.713107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 05:04:11.824059 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:04:11.824194 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:04:11.824213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:04:11.824260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 05:04:11.824274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:04:11.824286 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:04:11.824297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:04:11.824308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:04:11.824320 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 05:04:11.824354 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:04:11.824367 | orchestrator | skipping: 
[testbed-node-1] 2026-02-14 05:04:11.824378 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:04:11.824389 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 05:04:11.824466 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:04:11.824481 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:04:11.824493 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:04:11.824507 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:04:11.824519 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:04:11.824533 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 05:04:11.824553 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:04:11.824576 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:04:14.935517 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:04:14.935687 | orchestrator |
2026-02-14 05:04:14.935703 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2026-02-14 05:04:14.935715 | orchestrator | Saturday 14 February 2026 05:04:11 +0000 (0:00:02.853) 0:00:18.679 *****
2026-02-14 05:04:14.935728 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-14 05:04:14.935763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-14 05:04:14.935775 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR':
'1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:04:14.935786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:04:14.935796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 05:04:14.935806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:04:14.935833 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:04:14.935846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:04:14.935864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 05:04:14.935874 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:04:14.935885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:04:14.935918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:04:14.935929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:04:14.935939 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:04:14.935949 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 05:04:14.935959 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:04:14.935969 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:04:14.935994 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:04:14.936014 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:04:26.991193 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:04:26.991342 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 05:04:26.991363 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:04:26.991375 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-14 05:04:26.991387 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:04:26.991401 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:04:26.991412 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:04:26.991441 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:04:26.991453 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:04:26.991465 | orchestrator |
2026-02-14 05:04:26.991496 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] *****************
2026-02-14 05:04:26.991509 | orchestrator | Saturday 14 February 2026 05:04:14 +0000 (0:00:03.118) 0:00:21.798 *****
2026-02-14 05:04:26.991521 | orchestrator | skipping: [testbed-manager]
2026-02-14 05:04:26.991532 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:04:26.991543 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:04:26.991554 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:04:26.991565 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:04:26.991575 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:04:26.991586 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:04:26.991597 | orchestrator |
2026-02-14 05:04:26.991608 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-02-14 05:04:26.991619 | orchestrator | Saturday 14 February 2026 05:04:17 +0000 (0:00:02.131) 0:00:23.930 *****
2026-02-14 05:04:26.991630 | orchestrator | skipping: [testbed-manager]
2026-02-14 05:04:26.991641 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:04:26.991655 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:04:26.991695 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:04:26.991714 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:04:26.991732 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:04:26.991751 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:04:26.991770 | orchestrator |
2026-02-14 05:04:26.991789 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-02-14 05:04:26.991809 | orchestrator | Saturday 14 February 2026 05:04:19 +0000 (0:00:02.102) 0:00:26.032 *****
2026-02-14 05:04:26.991828 | orchestrator | skipping: [testbed-manager]
2026-02-14 05:04:26.991844 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:04:26.991857 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:04:26.991870 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:04:26.991882 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:04:26.991919 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:04:26.991930 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:04:26.991941 | orchestrator |
2026-02-14 05:04:26.991951 | orchestrator | TASK [common : Copying over kolla.target] **************************************
2026-02-14 05:04:26.991962 | orchestrator | Saturday 14 February 2026 05:04:21 +0000 (0:00:01.958) 0:00:27.991 *****
2026-02-14 05:04:26.991973 | orchestrator | ok: [testbed-manager]
2026-02-14 05:04:26.992022 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:04:26.992086 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:04:26.992098 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:04:26.992108 | orchestrator | ok: [testbed-node-3]
2026-02-14 05:04:26.992119 | orchestrator | ok: [testbed-node-4]
2026-02-14 05:04:26.992130 | orchestrator | ok: [testbed-node-5]
2026-02-14 05:04:26.992141 | orchestrator |
2026-02-14 05:04:26.992152 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-02-14 05:04:26.992163 | orchestrator | Saturday 14 February 2026 05:04:24 +0000 (0:00:02.891) 0:00:30.882 *****
2026-02-14 05:04:26.992174 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-14 05:04:26.992188 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-14 05:04:26.992213 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY':
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 05:04:26.992232 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 05:04:26.992244 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 05:04:26.992268 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:04:29.816203 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 05:04:29.816311 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 05:04:29.816328 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:04:29.816364 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:04:29.816392 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:04:29.816405 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:04:29.816419 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:04:29.816450 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:04:29.816463 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:04:29.816475 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:04:29.816493 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:04:29.816505 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:04:29.816517 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:04:29.816528 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:04:29.816540 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:04:29.816551 | orchestrator | 2026-02-14 05:04:29.816563 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-02-14 05:04:29.816577 | orchestrator | Saturday 14 February 2026 05:04:28 +0000 (0:00:04.881) 0:00:35.764 ***** 2026-02-14 05:04:29.816588 | orchestrator | [WARNING]: Skipped 2026-02-14 05:04:29.816606 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-02-14 05:04:49.857221 | orchestrator | to this access issue: 2026-02-14 05:04:49.857336 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-02-14 05:04:49.857352 | orchestrator | directory 2026-02-14 05:04:49.857365 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-14 05:04:49.857376 | orchestrator | 2026-02-14 05:04:49.857388 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-02-14 05:04:49.857400 | orchestrator | Saturday 14 February 2026 05:04:31 +0000 (0:00:02.329) 0:00:38.094 ***** 2026-02-14 05:04:49.857411 | orchestrator | [WARNING]: Skipped 2026-02-14 05:04:49.857422 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-02-14 05:04:49.857433 | orchestrator | to this access issue: 2026-02-14 05:04:49.857455 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-02-14 
05:04:49.857467 | orchestrator | directory 2026-02-14 05:04:49.857478 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-14 05:04:49.857489 | orchestrator | 2026-02-14 05:04:49.857528 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-02-14 05:04:49.857549 | orchestrator | Saturday 14 February 2026 05:04:33 +0000 (0:00:01.872) 0:00:39.966 ***** 2026-02-14 05:04:49.857568 | orchestrator | [WARNING]: Skipped 2026-02-14 05:04:49.857586 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-02-14 05:04:49.857605 | orchestrator | to this access issue: 2026-02-14 05:04:49.857624 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-02-14 05:04:49.857642 | orchestrator | directory 2026-02-14 05:04:49.857660 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-14 05:04:49.857681 | orchestrator | 2026-02-14 05:04:49.857701 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-02-14 05:04:49.857723 | orchestrator | Saturday 14 February 2026 05:04:34 +0000 (0:00:01.840) 0:00:41.807 ***** 2026-02-14 05:04:49.857744 | orchestrator | [WARNING]: Skipped 2026-02-14 05:04:49.857765 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-02-14 05:04:49.857785 | orchestrator | to this access issue: 2026-02-14 05:04:49.857805 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-02-14 05:04:49.857824 | orchestrator | directory 2026-02-14 05:04:49.857843 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-14 05:04:49.857864 | orchestrator | 2026-02-14 05:04:49.857908 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-02-14 05:04:49.857927 | orchestrator | Saturday 14 February 2026 05:04:36 +0000 (0:00:01.949) 0:00:43.757 ***** 
2026-02-14 05:04:49.857945 | orchestrator | ok: [testbed-manager] 2026-02-14 05:04:49.857962 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:04:49.857979 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:04:49.857996 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:04:49.858015 | orchestrator | ok: [testbed-node-3] 2026-02-14 05:04:49.858153 | orchestrator | ok: [testbed-node-5] 2026-02-14 05:04:49.858173 | orchestrator | ok: [testbed-node-4] 2026-02-14 05:04:49.858192 | orchestrator | 2026-02-14 05:04:49.858211 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-02-14 05:04:49.858232 | orchestrator | Saturday 14 February 2026 05:04:41 +0000 (0:00:04.509) 0:00:48.266 ***** 2026-02-14 05:04:49.858251 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-14 05:04:49.858271 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-14 05:04:49.858292 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-14 05:04:49.858312 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-14 05:04:49.858332 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-14 05:04:49.858365 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-14 05:04:49.858388 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-14 05:04:49.858409 | orchestrator | 2026-02-14 05:04:49.858429 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-02-14 05:04:49.858446 | orchestrator | Saturday 14 February 2026 05:04:44 +0000 (0:00:03.034) 0:00:51.301 ***** 2026-02-14 
05:04:49.858466 | orchestrator | ok: [testbed-manager] 2026-02-14 05:04:49.858486 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:04:49.858505 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:04:49.858526 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:04:49.858546 | orchestrator | ok: [testbed-node-3] 2026-02-14 05:04:49.858566 | orchestrator | ok: [testbed-node-5] 2026-02-14 05:04:49.858584 | orchestrator | ok: [testbed-node-4] 2026-02-14 05:04:49.858603 | orchestrator | 2026-02-14 05:04:49.858621 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-02-14 05:04:49.858658 | orchestrator | Saturday 14 February 2026 05:04:48 +0000 (0:00:03.652) 0:00:54.953 ***** 2026-02-14 05:04:49.858682 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 05:04:49.858736 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:04:49.858762 
| orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 05:04:49.858784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:04:49.858806 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:04:49.858830 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:04:49.858862 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 05:04:49.859164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:04:49.859215 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 
05:04:59.722692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:04:59.722800 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 05:04:59.722817 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:04:59.722829 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 05:04:59.722857 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:04:59.722945 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:04:59.722962 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:04:59.722993 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 05:04:59.723007 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:04:59.723018 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:04:59.723030 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:04:59.723041 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:04:59.723053 | orchestrator | 2026-02-14 05:04:59.723066 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-02-14 05:04:59.723079 | orchestrator | Saturday 14 February 2026 05:04:50 +0000 (0:00:02.829) 0:00:57.783 ***** 2026-02-14 05:04:59.723090 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-14 05:04:59.723116 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-14 05:04:59.723127 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-14 05:04:59.723138 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-14 05:04:59.723148 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-14 05:04:59.723159 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-14 05:04:59.723169 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-14 05:04:59.723180 | orchestrator | 
2026-02-14 05:04:59.723191 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-02-14 05:04:59.723202 | orchestrator | Saturday 14 February 2026 05:04:53 +0000 (0:00:02.929) 0:01:00.712 ***** 2026-02-14 05:04:59.723213 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-14 05:04:59.723224 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-14 05:04:59.723236 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-14 05:04:59.723250 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-14 05:04:59.723263 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-14 05:04:59.723275 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-14 05:04:59.723288 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-14 05:04:59.723301 | orchestrator | 2026-02-14 05:04:59.723313 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-02-14 05:04:59.723325 | orchestrator | Saturday 14 February 2026 05:04:57 +0000 (0:00:03.401) 0:01:04.114 ***** 2026-02-14 05:04:59.723398 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 05:05:01.632398 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 05:05:01.632500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 05:05:01.632516 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 05:05:01.632568 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 05:05:01.632580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 05:05:01.632592 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-14 05:05:01.632604 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:05:01.632634 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:05:01.632647 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:05:01.632658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:05:01.632682 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:05:01.632694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:05:01.632706 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:05:01.632720 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:05:01.632739 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:05:04.434631 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:05:04.434743 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-02-14 05:05:04.434782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:05:04.434810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:05:04.434822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:05:04.434833 | orchestrator | 2026-02-14 05:05:04.434846 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-02-14 05:05:04.434859 | orchestrator | Saturday 14 February 2026 05:05:01 +0000 (0:00:04.378) 0:01:08.493 ***** 2026-02-14 05:05:04.434924 | orchestrator | changed: [testbed-manager] => { 2026-02-14 05:05:04.434938 | orchestrator |  "msg": "Notifying handlers" 2026-02-14 05:05:04.434951 | orchestrator | } 2026-02-14 05:05:04.434971 | orchestrator | changed: [testbed-node-0] => { 
2026-02-14 05:05:04.434989 | orchestrator |  "msg": "Notifying handlers"
2026-02-14 05:05:04.435007 | orchestrator | }
2026-02-14 05:05:04.435024 | orchestrator | changed: [testbed-node-1] => {
2026-02-14 05:05:04.435041 | orchestrator |  "msg": "Notifying handlers"
2026-02-14 05:05:04.435058 | orchestrator | }
2026-02-14 05:05:04.435076 | orchestrator | changed: [testbed-node-2] => {
2026-02-14 05:05:04.435094 | orchestrator |  "msg": "Notifying handlers"
2026-02-14 05:05:04.435110 | orchestrator | }
2026-02-14 05:05:04.435126 | orchestrator | changed: [testbed-node-3] => {
2026-02-14 05:05:04.435143 | orchestrator |  "msg": "Notifying handlers"
2026-02-14 05:05:04.435161 | orchestrator | }
2026-02-14 05:05:04.435179 | orchestrator | changed: [testbed-node-4] => {
2026-02-14 05:05:04.435197 | orchestrator |  "msg": "Notifying handlers"
2026-02-14 05:05:04.435216 | orchestrator | }
2026-02-14 05:05:04.435234 | orchestrator | changed: [testbed-node-5] => {
2026-02-14 05:05:04.435252 | orchestrator |  "msg": "Notifying handlers"
2026-02-14 05:05:04.435271 | orchestrator | }
2026-02-14 05:05:04.435290 | orchestrator |
2026-02-14 05:05:04.435309 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-14 05:05:04.435327 | orchestrator | Saturday 14 February 2026 05:05:03 +0000 (0:00:02.112) 0:01:10.605 *****
2026-02-14 05:05:04.435349 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-14 05:05:04.435420 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:05:04.435443 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:05:04.435463 | orchestrator | skipping: [testbed-manager]
2026-02-14 05:05:04.435482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-14 05:05:04.435501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:05:04.435520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:05:04.435538 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:05:04.435570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-14 05:05:04.435590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:05:04.435620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:05:04.435648 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:05:45.180303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-14 05:05:45.180420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:05:45.180437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:05:45.180468 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:05:45.180482 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-14 05:05:45.180494 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:05:45.180506 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:05:45.180536 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:05:45.180548 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-14 05:05:45.180578 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:05:45.180590 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:05:45.180601 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:05:45.180613 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-14 05:05:45.180629 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:05:45.180641 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:05:45.180652 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:05:45.180663 | orchestrator |
2026-02-14 05:05:45.180675 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-14 05:05:45.180688 | orchestrator | Saturday 14 February 2026 05:05:06 +0000 (0:00:02.958) 0:01:13.564 *****
2026-02-14 05:05:45.180699 | orchestrator |
2026-02-14 05:05:45.180710 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-14 05:05:45.180728 | orchestrator | Saturday 14 February 2026 05:05:07 +0000 (0:00:00.451) 0:01:14.015 *****
2026-02-14 05:05:45.180739 | orchestrator |
2026-02-14 05:05:45.180750 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-14 05:05:45.180761 | orchestrator | Saturday 14 February 2026 05:05:07 +0000 (0:00:00.437) 0:01:14.453 *****
2026-02-14 05:05:45.180772 | orchestrator |
2026-02-14 05:05:45.180783 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-14 05:05:45.180793 | orchestrator | Saturday 14 February 2026 05:05:08 +0000 (0:00:00.440) 0:01:14.894 *****
2026-02-14 05:05:45.180804 | orchestrator |
2026-02-14 05:05:45.180815 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-14 05:05:45.180826 | orchestrator | Saturday 14 February 2026 05:05:08 +0000 (0:00:00.441) 0:01:15.336 *****
2026-02-14 05:05:45.180839 | orchestrator |
2026-02-14 05:05:45.180880 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-14 05:05:45.180894 | orchestrator | Saturday 14 February 2026 05:05:09 +0000 (0:00:00.719) 0:01:16.055 *****
2026-02-14 05:05:45.180907 | orchestrator |
2026-02-14 05:05:45.180919 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-14 05:05:45.180932 | orchestrator | Saturday 14 February 2026 05:05:09 +0000 (0:00:00.462) 0:01:16.518 *****
2026-02-14 05:05:45.180944 | orchestrator |
2026-02-14 05:05:45.180957 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-02-14 05:05:45.180969 | orchestrator | Saturday 14 February 2026 05:05:10 +0000 (0:00:00.870) 0:01:17.388 *****
2026-02-14 05:05:45.180982 | orchestrator | changed: [testbed-manager]
2026-02-14 05:05:45.180994 | orchestrator | changed: [testbed-node-3]
2026-02-14 05:05:45.181007 | orchestrator | changed: [testbed-node-0]
2026-02-14 05:05:45.181020 | orchestrator | changed: [testbed-node-5]
2026-02-14 05:05:45.181034 | orchestrator | changed: [testbed-node-4]
2026-02-14 05:05:45.181047 | orchestrator | changed: [testbed-node-2]
2026-02-14 05:05:45.181066 | orchestrator | changed: [testbed-node-1]
2026-02-14 05:06:36.899107 | orchestrator |
2026-02-14 05:06:36.899224 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-02-14 05:06:36.899241 | orchestrator | Saturday 14 February 2026 05:05:45 +0000 (0:00:34.648) 0:01:52.037 *****
2026-02-14 05:06:36.899254 | orchestrator | changed: [testbed-manager]
2026-02-14 05:06:36.899267 | orchestrator | changed: [testbed-node-3]
2026-02-14 05:06:36.899278 | orchestrator | changed: [testbed-node-0]
2026-02-14 05:06:36.899289 | orchestrator | changed: [testbed-node-5]
2026-02-14 05:06:36.899300 | orchestrator | changed: [testbed-node-4]
2026-02-14 05:06:36.899311 | orchestrator | changed: [testbed-node-1]
2026-02-14 05:06:36.899321 | orchestrator | changed: [testbed-node-2]
2026-02-14 05:06:36.899332 | orchestrator |
2026-02-14 05:06:36.899344 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-02-14 05:06:36.899355 | orchestrator | Saturday 14 February 2026 05:06:20 +0000 (0:00:35.433) 0:02:27.470 *****
2026-02-14 05:06:36.899365 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:06:36.899378 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:06:36.899388 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:06:36.899399 | orchestrator | ok: [testbed-node-3]
2026-02-14 05:06:36.899410 | orchestrator | ok: [testbed-node-4]
2026-02-14 05:06:36.899420 | orchestrator | ok: [testbed-node-5]
2026-02-14 05:06:36.899431 | orchestrator | ok: [testbed-manager]
2026-02-14 05:06:36.899442 | orchestrator |
2026-02-14 05:06:36.899453 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-02-14 05:06:36.899464 | orchestrator | Saturday 14 February 2026 05:06:23 +0000 (0:00:03.300) 0:02:30.771 *****
2026-02-14 05:06:36.899475 | orchestrator | changed: [testbed-manager]
2026-02-14 05:06:36.899486 | orchestrator | changed: [testbed-node-3]
2026-02-14 05:06:36.899496 | orchestrator | changed: [testbed-node-4]
2026-02-14 05:06:36.899507 | orchestrator | changed: [testbed-node-5]
2026-02-14 05:06:36.899518 | orchestrator | changed: [testbed-node-0]
2026-02-14 05:06:36.899553 | orchestrator | changed: [testbed-node-2]
2026-02-14 05:06:36.899564 | orchestrator | changed: [testbed-node-1]
2026-02-14 05:06:36.899575 | orchestrator |
2026-02-14 05:06:36.899585 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 05:06:36.899597 | orchestrator | testbed-manager : ok=22  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-14 05:06:36.899624 | orchestrator | testbed-node-0 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-14 05:06:36.899636 | orchestrator | testbed-node-1 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-14 05:06:36.899650 | orchestrator | testbed-node-2 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-14 05:06:36.899662 | orchestrator | testbed-node-3 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-14 05:06:36.899675 | orchestrator | testbed-node-4 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-14 05:06:36.899687 | orchestrator | testbed-node-5 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-14 05:06:36.899700 | orchestrator |
2026-02-14 05:06:36.899713 | orchestrator |
2026-02-14 05:06:36.899725 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 05:06:36.899739 | orchestrator | Saturday 14 February 2026 05:06:36 +0000 (0:00:12.179) 0:02:42.951 *****
2026-02-14 05:06:36.899752 | orchestrator | ===============================================================================
2026-02-14 05:06:36.899768 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 35.43s
2026-02-14 05:06:36.899786 | orchestrator | common : Restart fluentd container ------------------------------------- 34.65s
2026-02-14 05:06:36.899801 | orchestrator | common : Restart cron container ---------------------------------------- 12.18s
2026-02-14 05:06:36.899813 | orchestrator | common : Copying over config.json files for services -------------------- 4.88s
2026-02-14 05:06:36.899825 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.51s
2026-02-14 05:06:36.899861 | orchestrator | service-check-containers : common | Check containers -------------------- 4.38s
2026-02-14 05:06:36.899873 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.37s
2026-02-14 05:06:36.899886 | orchestrator | common : Flush handlers ------------------------------------------------- 3.82s
2026-02-14 05:06:36.899898 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.65s
2026-02-14 05:06:36.899910 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.40s
2026-02-14 05:06:36.899923 | orchestrator | common : Initializing toolbox container using normal user --------------- 3.30s
2026-02-14 05:06:36.899935 | orchestrator | common : include_tasks -------------------------------------------------- 3.25s
2026-02-14 05:06:36.899948 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.21s
2026-02-14 05:06:36.899960 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.12s
2026-02-14 05:06:36.899973 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.03s
2026-02-14 05:06:36.899985 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.96s
2026-02-14 05:06:36.899998 | orchestrator | common : include_tasks -------------------------------------------------- 2.96s
2026-02-14 05:06:36.900027 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.93s
2026-02-14 05:06:36.900039 | orchestrator | common : Copying over kolla.target -------------------------------------- 2.89s
2026-02-14 05:06:36.900059 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.85s
2026-02-14 05:06:37.304752 | orchestrator | + osism apply -a upgrade loadbalancer
2026-02-14 05:06:39.617212 | orchestrator | 2026-02-14 05:06:39 | INFO  | Task 18ced67d-1d91-4fa4-b1c9-a4daac47aa9f (loadbalancer) was prepared for execution.
2026-02-14 05:06:39.617333 | orchestrator | 2026-02-14 05:06:39 | INFO  | It takes a moment until task 18ced67d-1d91-4fa4-b1c9-a4daac47aa9f (loadbalancer) has been started and output is visible here.
2026-02-14 05:07:16.050449 | orchestrator |
2026-02-14 05:07:16.050588 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-14 05:07:16.050614 | orchestrator |
2026-02-14 05:07:16.050635 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-14 05:07:16.050654 | orchestrator | Saturday 14 February 2026 05:06:46 +0000 (0:00:02.520) 0:00:02.520 *****
2026-02-14 05:07:16.050674 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:07:16.050693 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:07:16.050712 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:07:16.050732 | orchestrator |
2026-02-14 05:07:16.050751 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-14 05:07:16.050770 | orchestrator | Saturday 14 February 2026 05:06:48 +0000 (0:00:01.789) 0:00:04.310 *****
2026-02-14 05:07:16.050789 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-02-14 05:07:16.050807 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-02-14 05:07:16.050825 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-02-14 05:07:16.050844 | orchestrator |
2026-02-14 05:07:16.050863 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-02-14 05:07:16.050883 | orchestrator |
2026-02-14 05:07:16.050902 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-02-14 05:07:16.050920 | orchestrator | Saturday 14 February 2026 05:06:51 +0000 (0:00:02.824) 0:00:07.134 *****
2026-02-14 05:07:16.051002 | orchestrator | included: /ansible/roles/loadbalancer/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 05:07:16.051027 | orchestrator |
2026-02-14 05:07:16.051046 | orchestrator | TASK [loadbalancer : Stop and remove containers for haproxy exporter containers] ***
2026-02-14 05:07:16.051066 | orchestrator | Saturday 14 February 2026 05:06:53 +0000 (0:00:02.026) 0:00:09.160 *****
2026-02-14 05:07:16.051085 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:07:16.051105 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:07:16.051123 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:07:16.051142 | orchestrator |
2026-02-14 05:07:16.051161 | orchestrator | TASK [loadbalancer : Removing config for haproxy exporter] *********************
2026-02-14 05:07:16.051179 | orchestrator | Saturday 14 February 2026 05:06:55 +0000 (0:00:01.939) 0:00:11.100 *****
2026-02-14 05:07:16.051198 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:07:16.051216 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:07:16.051235 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:07:16.051253 | orchestrator |
2026-02-14 05:07:16.051271 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-02-14 05:07:16.051290 | orchestrator | Saturday 14 February 2026 05:06:57 +0000 (0:00:02.147) 0:00:13.247 *****
2026-02-14 05:07:16.051309 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:07:16.051345 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:07:16.051379 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:07:16.051398 | orchestrator |
2026-02-14 05:07:16.051417 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-02-14 05:07:16.051436 | orchestrator | Saturday 14 February 2026 05:06:59 +0000 (0:00:01.837) 0:00:15.085 *****
2026-02-14 05:07:16.051454 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 05:07:16.051472 | orchestrator |
2026-02-14 05:07:16.051490 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-02-14 05:07:16.051508 | orchestrator | Saturday 14 February 2026 05:07:01 +0000 (0:00:01.996) 0:00:17.081 *****
2026-02-14 05:07:16.051558 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:07:16.051577 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:07:16.051595 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:07:16.051613 | orchestrator |
2026-02-14 05:07:16.051632 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-02-14 05:07:16.051650 | orchestrator | Saturday 14 February 2026 05:07:02 +0000 (0:00:01.667) 0:00:18.749 *****
2026-02-14 05:07:16.051670 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-14 05:07:16.051687 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-14 05:07:16.051704 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-14 05:07:16.051721 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-14 05:07:16.051738 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-14 05:07:16.051756 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-14 05:07:16.051776 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-14 05:07:16.051794 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-14 05:07:16.051813 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-14 05:07:16.051832 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-14 05:07:16.051849 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-14 05:07:16.051868 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-14 05:07:16.051886 | orchestrator |
2026-02-14 05:07:16.051918 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-14 05:07:16.051937 | orchestrator | Saturday 14 February 2026 05:07:07 +0000 (0:00:04.173) 0:00:22.923 *****
2026-02-14 05:07:16.051980 | orchestrator | ok: [testbed-node-0] => (item=ip_vs)
2026-02-14 05:07:16.052000 | orchestrator | ok: [testbed-node-1] => (item=ip_vs)
2026-02-14 05:07:16.052018 | orchestrator | ok: [testbed-node-2] => (item=ip_vs)
2026-02-14 05:07:16.052037 | orchestrator |
2026-02-14 05:07:16.052055 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-14 05:07:16.052094 | orchestrator | Saturday 14 February 2026 05:07:09 +0000 (0:00:02.246) 0:00:24.940 *****
2026-02-14 05:07:16.052115 | orchestrator | ok: [testbed-node-0] => (item=ip_vs)
2026-02-14 05:07:16.052134 | orchestrator | ok: [testbed-node-1] => (item=ip_vs)
2026-02-14 05:07:16.052153 | orchestrator | ok: [testbed-node-2] => (item=ip_vs)
2026-02-14 05:07:16.052171 | orchestrator |
2026-02-14 05:07:16.052190 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-02-14 05:07:16.052208 | orchestrator | Saturday 14 February 2026 05:07:11 +0000 (0:00:01.949) 0:00:27.186 *****
2026-02-14 05:07:16.052227 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-02-14 05:07:16.052246 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:07:16.052264 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-02-14 05:07:16.052282 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:07:16.052301 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-02-14 05:07:16.052319 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:07:16.052338 | orchestrator |
2026-02-14 05:07:16.052357 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-02-14 05:07:16.052375 | orchestrator | Saturday 14 February 2026 05:07:13 +0000 (0:00:01.949) 0:00:29.136 *****
2026-02-14 05:07:16.052405 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-14 05:07:16.052448 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-14 05:07:16.052470 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-14 05:07:16.052489 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-14 05:07:16.052508 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-14 05:07:16.052539 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-14 05:07:27.318468 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-14 05:07:27.318558 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-14 05:07:27.318564 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-14 05:07:27.318569 | orchestrator |
2026-02-14 05:07:27.318574 |
orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-02-14 05:07:27.318579 | orchestrator | Saturday 14 February 2026 05:07:16 +0000 (0:00:02.778) 0:00:31.915 ***** 2026-02-14 05:07:27.318583 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:07:27.318588 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:07:27.318592 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:07:27.318596 | orchestrator | 2026-02-14 05:07:27.318601 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-02-14 05:07:27.318605 | orchestrator | Saturday 14 February 2026 05:07:18 +0000 (0:00:02.009) 0:00:33.925 ***** 2026-02-14 05:07:27.318609 | orchestrator | ok: [testbed-node-0] => (item=users) 2026-02-14 05:07:27.318614 | orchestrator | ok: [testbed-node-1] => (item=users) 2026-02-14 05:07:27.318618 | orchestrator | ok: [testbed-node-2] => (item=users) 2026-02-14 05:07:27.318621 | orchestrator | ok: [testbed-node-0] => (item=rules) 2026-02-14 05:07:27.318625 | orchestrator | ok: [testbed-node-1] => (item=rules) 2026-02-14 05:07:27.318629 | orchestrator | ok: [testbed-node-2] => (item=rules) 2026-02-14 05:07:27.318633 | orchestrator | 2026-02-14 05:07:27.318637 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-02-14 05:07:27.318641 | orchestrator | Saturday 14 February 2026 05:07:20 +0000 (0:00:02.924) 0:00:36.849 ***** 2026-02-14 05:07:27.318645 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:07:27.318649 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:07:27.318653 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:07:27.318656 | orchestrator | 2026-02-14 05:07:27.318660 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-02-14 05:07:27.318664 | orchestrator | Saturday 14 February 2026 05:07:23 +0000 (0:00:02.311) 0:00:39.161 ***** 2026-02-14 05:07:27.318668 | orchestrator | ok: 
[testbed-node-0] 2026-02-14 05:07:27.318672 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:07:27.318676 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:07:27.318680 | orchestrator | 2026-02-14 05:07:27.318684 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-02-14 05:07:27.318688 | orchestrator | Saturday 14 February 2026 05:07:25 +0000 (0:00:02.238) 0:00:41.400 ***** 2026-02-14 05:07:27.318692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-14 05:07:27.318711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 05:07:27.318720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 05:07:27.318726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__03e8c55a6fcf08c820e0a3499501319d2b38be40', '__omit_place_holder__03e8c55a6fcf08c820e0a3499501319d2b38be40'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-14 05:07:27.318730 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:07:27.318735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-14 05:07:27.318739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 05:07:27.318743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 05:07:27.318750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__03e8c55a6fcf08c820e0a3499501319d2b38be40', '__omit_place_holder__03e8c55a6fcf08c820e0a3499501319d2b38be40'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-14 05:07:27.318755 | orchestrator | skipping: [testbed-node-1] 2026-02-14 
05:07:27.318765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-14 05:07:31.491528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 05:07:31.491635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 05:07:31.491651 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__03e8c55a6fcf08c820e0a3499501319d2b38be40', '__omit_place_holder__03e8c55a6fcf08c820e0a3499501319d2b38be40'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-14 05:07:31.491663 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:07:31.491676 | orchestrator | 2026-02-14 05:07:31.491688 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-02-14 05:07:31.491701 | orchestrator | Saturday 14 February 2026 05:07:27 +0000 (0:00:01.783) 0:00:43.183 ***** 2026-02-14 05:07:31.491712 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-14 05:07:31.491746 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': 
True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-14 05:07:31.491759 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-14 05:07:31.491806 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-14 05:07:31.491819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 05:07:31.491831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__03e8c55a6fcf08c820e0a3499501319d2b38be40', '__omit_place_holder__03e8c55a6fcf08c820e0a3499501319d2b38be40'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-14 05:07:31.491843 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-14 05:07:31.491862 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-14 05:07:31.491874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 05:07:31.491900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__03e8c55a6fcf08c820e0a3499501319d2b38be40', '__omit_place_holder__03e8c55a6fcf08c820e0a3499501319d2b38be40'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-14 05:07:45.835317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 05:07:45.835433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__03e8c55a6fcf08c820e0a3499501319d2b38be40', '__omit_place_holder__03e8c55a6fcf08c820e0a3499501319d2b38be40'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-14 05:07:45.835450 | orchestrator | 2026-02-14 05:07:45.835464 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-02-14 05:07:45.835477 | orchestrator | Saturday 14 February 2026 05:07:31 +0000 (0:00:04.180) 0:00:47.363 ***** 2026-02-14 05:07:45.835489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-14 05:07:45.835525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-14 05:07:45.835538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-14 05:07:45.835563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-14 05:07:45.835597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-14 05:07:45.835609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-14 05:07:45.835621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-14 05:07:45.835651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-14 05:07:45.835676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-14 05:07:45.835706 | orchestrator | 2026-02-14 05:07:45.835726 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-02-14 05:07:45.835744 | orchestrator | Saturday 14 February 2026 05:07:36 +0000 (0:00:04.943) 0:00:52.307 ***** 2026-02-14 05:07:45.835763 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-14 05:07:45.835784 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-14 05:07:45.835804 | orchestrator | ok: [testbed-node-2] => 
(item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-14 05:07:45.835824 | orchestrator | 2026-02-14 05:07:45.835845 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-02-14 05:07:45.835866 | orchestrator | Saturday 14 February 2026 05:07:39 +0000 (0:00:02.752) 0:00:55.059 ***** 2026-02-14 05:07:45.835878 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-14 05:07:45.835897 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-14 05:07:45.835908 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-14 05:07:45.835919 | orchestrator | 2026-02-14 05:07:45.835930 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-02-14 05:07:45.835940 | orchestrator | Saturday 14 February 2026 05:07:43 +0000 (0:00:04.475) 0:00:59.535 ***** 2026-02-14 05:07:45.835952 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:07:45.835964 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:07:45.835985 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:08:06.683483 | orchestrator | 2026-02-14 05:08:06.683602 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-02-14 05:08:06.683619 | orchestrator | Saturday 14 February 2026 05:07:45 +0000 (0:00:02.170) 0:01:01.705 ***** 2026-02-14 05:08:06.683631 | orchestrator | ok: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-14 05:08:06.683643 | orchestrator | ok: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-14 05:08:06.683654 | orchestrator | ok: [testbed-node-2] => 
(item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-14 05:08:06.683686 | orchestrator | 2026-02-14 05:08:06.683698 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-02-14 05:08:06.683709 | orchestrator | Saturday 14 February 2026 05:07:48 +0000 (0:00:03.020) 0:01:04.726 ***** 2026-02-14 05:08:06.683719 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-14 05:08:06.683731 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-14 05:08:06.683742 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-14 05:08:06.683753 | orchestrator | 2026-02-14 05:08:06.683763 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-14 05:08:06.683792 | orchestrator | Saturday 14 February 2026 05:07:51 +0000 (0:00:02.847) 0:01:07.573 ***** 2026-02-14 05:08:06.683815 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 05:08:06.683826 | orchestrator | 2026-02-14 05:08:06.683837 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-02-14 05:08:06.683847 | orchestrator | Saturday 14 February 2026 05:07:53 +0000 (0:00:01.991) 0:01:09.565 ***** 2026-02-14 05:08:06.683859 | orchestrator | ok: [testbed-node-0] => (item=haproxy.pem) 2026-02-14 05:08:06.683871 | orchestrator | ok: [testbed-node-1] => (item=haproxy.pem) 2026-02-14 05:08:06.683882 | orchestrator | ok: [testbed-node-2] => (item=haproxy.pem) 2026-02-14 05:08:06.683892 | orchestrator | 2026-02-14 05:08:06.683903 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-02-14 05:08:06.683914 | 
orchestrator | Saturday 14 February 2026 05:07:56 +0000 (0:00:02.720) 0:01:12.285 ***** 2026-02-14 05:08:06.683924 | orchestrator | ok: [testbed-node-0] => (item=haproxy-internal.pem) 2026-02-14 05:08:06.683935 | orchestrator | ok: [testbed-node-1] => (item=haproxy-internal.pem) 2026-02-14 05:08:06.683946 | orchestrator | ok: [testbed-node-2] => (item=haproxy-internal.pem) 2026-02-14 05:08:06.683956 | orchestrator | 2026-02-14 05:08:06.683967 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-02-14 05:08:06.683978 | orchestrator | Saturday 14 February 2026 05:07:59 +0000 (0:00:02.809) 0:01:15.095 ***** 2026-02-14 05:08:06.683989 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:08:06.684000 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:08:06.684012 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:08:06.684024 | orchestrator | 2026-02-14 05:08:06.684037 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-02-14 05:08:06.684049 | orchestrator | Saturday 14 February 2026 05:08:00 +0000 (0:00:01.385) 0:01:16.480 ***** 2026-02-14 05:08:06.684063 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:08:06.684076 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:08:06.684117 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:08:06.684139 | orchestrator | 2026-02-14 05:08:06.684161 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-14 05:08:06.684181 | orchestrator | Saturday 14 February 2026 05:08:02 +0000 (0:00:01.935) 0:01:18.416 ***** 2026-02-14 05:08:06.684199 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-14 05:08:06.684230 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-14 05:08:06.684274 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-14 05:08:06.684289 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-14 05:08:06.684303 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-14 05:08:06.684317 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-14 05:08:06.684332 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-14 05:08:06.684345 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-14 05:08:06.684378 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-14 05:08:10.515288 | orchestrator | 2026-02-14 05:08:10.515400 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-14 05:08:10.515418 | orchestrator | Saturday 14 February 2026 05:08:06 +0000 (0:00:04.135) 0:01:22.551 ***** 2026-02-14 05:08:10.515434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-14 05:08:10.515452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 05:08:10.515466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 05:08:10.515480 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:08:10.515495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-14 05:08:10.515508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 05:08:10.515620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 05:08:10.515639 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:08:10.515675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-14 05:08:10.515689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 05:08:10.515704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 05:08:10.515719 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:08:10.515734 | orchestrator | 2026-02-14 05:08:10.515748 | orchestrator | TASK [service-cert-copy : 
mariadb | Copying over backend internal TLS key] ***** 2026-02-14 05:08:10.515758 | orchestrator | Saturday 14 February 2026 05:08:08 +0000 (0:00:01.654) 0:01:24.206 ***** 2026-02-14 05:08:10.515767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-14 05:08:10.515776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 05:08:10.515796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 05:08:10.515805 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:08:10.515824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-14 05:08:22.427935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 05:08:22.428053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 05:08:22.428070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-14 05:08:22.428083 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:08:22.428096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 05:08:22.428167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 05:08:22.428181 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:08:22.428193 | orchestrator | 2026-02-14 05:08:22.428204 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-02-14 05:08:22.428232 | orchestrator | Saturday 14 February 2026 05:08:10 +0000 (0:00:02.178) 0:01:26.385 ***** 2026-02-14 05:08:22.428244 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-14 05:08:22.428257 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-14 05:08:22.428267 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-14 05:08:22.428278 | orchestrator | 2026-02-14 05:08:22.428289 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-02-14 05:08:22.428300 | orchestrator | Saturday 14 February 2026 05:08:12 +0000 (0:00:02.487) 0:01:28.872 ***** 2026-02-14 05:08:22.428311 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-14 05:08:22.428322 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-14 05:08:22.428333 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-14 05:08:22.428344 | orchestrator | 2026-02-14 05:08:22.428372 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-02-14 05:08:22.428384 | orchestrator | Saturday 14 February 2026 05:08:15 +0000 (0:00:02.587) 0:01:31.460 ***** 2026-02-14 
05:08:22.428396 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-14 05:08:22.428407 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-14 05:08:22.428418 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-14 05:08:22.428429 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:08:22.428440 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-14 05:08:22.428451 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-14 05:08:22.428462 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:08:22.428475 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-14 05:08:22.428488 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:08:22.428501 | orchestrator | 2026-02-14 05:08:22.428513 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-02-14 05:08:22.428527 | orchestrator | Saturday 14 February 2026 05:08:18 +0000 (0:00:02.638) 0:01:34.098 ***** 2026-02-14 05:08:22.428541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 
'timeout': '30'}}}) 2026-02-14 05:08:22.428562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-14 05:08:22.428575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-14 05:08:22.428594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-14 05:08:22.428616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-14 05:08:26.383767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-14 05:08:26.383868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-14 05:08:26.383910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-14 05:08:26.383924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-14 05:08:26.383936 | orchestrator | 2026-02-14 05:08:26.383949 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-02-14 05:08:26.383961 | orchestrator | Saturday 14 February 2026 05:08:22 +0000 (0:00:04.198) 0:01:38.297 ***** 2026-02-14 05:08:26.383974 | orchestrator | changed: [testbed-node-0] => { 2026-02-14 05:08:26.383985 | orchestrator |  "msg": "Notifying handlers" 2026-02-14 05:08:26.383996 | orchestrator | } 2026-02-14 05:08:26.384008 | orchestrator | changed: [testbed-node-1] => { 2026-02-14 05:08:26.384019 | orchestrator |  "msg": "Notifying handlers" 2026-02-14 05:08:26.384029 | orchestrator | } 2026-02-14 05:08:26.384040 | orchestrator | changed: [testbed-node-2] => { 2026-02-14 05:08:26.384051 | orchestrator |  
"msg": "Notifying handlers" 2026-02-14 05:08:26.384062 | orchestrator | } 2026-02-14 05:08:26.384073 | orchestrator | 2026-02-14 05:08:26.384084 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-14 05:08:26.384115 | orchestrator | Saturday 14 February 2026 05:08:24 +0000 (0:00:01.654) 0:01:39.952 ***** 2026-02-14 05:08:26.384184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-14 05:08:26.384219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 05:08:26.384232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 05:08:26.384252 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:08:26.384264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-14 05:08:26.384275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 05:08:26.384288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 05:08:26.384301 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:08:26.384320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-14 05:08:26.384334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 05:08:26.384357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 05:08:32.070979 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:08:32.071113 | orchestrator | 2026-02-14 05:08:32.071142 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-02-14 05:08:32.071238 | orchestrator | Saturday 14 February 2026 05:08:26 +0000 (0:00:02.306) 0:01:42.259 ***** 2026-02-14 05:08:32.071258 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 05:08:32.071275 | orchestrator | 2026-02-14 05:08:32.071294 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-02-14 05:08:32.071310 | orchestrator | Saturday 14 February 2026 05:08:28 +0000 (0:00:02.011) 0:01:44.271 ***** 2026-02-14 05:08:32.071334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 
'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:08:32.071360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-14 05:08:32.071382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-14 05:08:32.071424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-14 05:08:32.071472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:08:32.071524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-14 05:08:32.071546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-14 05:08:32.071559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-14 05:08:32.071580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:08:32.071594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-14 05:08:32.071624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-14 05:08:33.776892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-14 05:08:33.776986 | orchestrator | 2026-02-14 05:08:33.777001 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-02-14 05:08:33.777013 | orchestrator | Saturday 14 February 2026 05:08:33 +0000 (0:00:04.779) 0:01:49.050 ***** 2026-02-14 05:08:33.777026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-14 05:08:33.777041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-14 05:08:33.777067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-14 05:08:33.777099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-14 05:08:33.777110 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:08:33.777141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-14 05:08:33.777190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-14 05:08:33.777204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-14 05:08:33.777214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-14 05:08:33.777223 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:08:33.777238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-14 05:08:33.777258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-14 05:08:33.777276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-14 05:08:48.693491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-14 05:08:48.693570 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:08:48.693578 | orchestrator | 2026-02-14 05:08:48.693582 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-02-14 05:08:48.693588 | orchestrator | Saturday 14 February 2026 05:08:34 +0000 (0:00:01.686) 0:01:50.736 ***** 2026-02-14 05:08:48.693593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:08:48.693599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:08:48.693605 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:08:48.693608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:08:48.693612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:08:48.693630 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:08:48.693645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:08:48.693649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:08:48.693653 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:08:48.693656 | orchestrator | 2026-02-14 05:08:48.693660 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-02-14 05:08:48.693664 | orchestrator | Saturday 14 February 2026 05:08:37 +0000 (0:00:02.291) 0:01:53.028 ***** 2026-02-14 
05:08:48.693668 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:08:48.693673 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:08:48.693676 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:08:48.693680 | orchestrator | 2026-02-14 05:08:48.693684 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-02-14 05:08:48.693688 | orchestrator | Saturday 14 February 2026 05:08:39 +0000 (0:00:02.328) 0:01:55.357 ***** 2026-02-14 05:08:48.693692 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:08:48.693695 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:08:48.693699 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:08:48.693703 | orchestrator | 2026-02-14 05:08:48.693706 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-02-14 05:08:48.693710 | orchestrator | Saturday 14 February 2026 05:08:42 +0000 (0:00:02.882) 0:01:58.239 ***** 2026-02-14 05:08:48.693714 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 05:08:48.693718 | orchestrator | 2026-02-14 05:08:48.693721 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-02-14 05:08:48.693725 | orchestrator | Saturday 14 February 2026 05:08:44 +0000 (0:00:01.746) 0:01:59.986 ***** 2026-02-14 05:08:48.693740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:08:48.693746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-14 05:08:48.693752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-14 05:08:48.693762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 
'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:08:48.693767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-14 05:08:48.693771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-14 05:08:48.693778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:08:50.369832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-14 05:08:50.369980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-14 05:08:50.369998 | orchestrator | 2026-02-14 05:08:50.370013 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-02-14 05:08:50.370086 | orchestrator | Saturday 14 February 2026 05:08:48 +0000 (0:00:04.576) 0:02:04.562 ***** 2026-02-14 05:08:50.370101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-14 05:08:50.370116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-14 05:08:50.370128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-14 05:08:50.370139 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:08:50.370191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-14 05:08:50.370274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-14 05:08:50.370287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-14 05:08:50.370299 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:08:50.370311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-14 05:08:50.370323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-14 05:08:50.370355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-14 05:09:06.812670 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:09:06.812778 | orchestrator | 2026-02-14 05:09:06.812795 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-02-14 05:09:06.812808 | orchestrator | Saturday 14 February 2026 05:08:50 +0000 (0:00:01.676) 0:02:06.239 ***** 2026-02-14 05:09:06.812820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:09:06.812853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:09:06.812867 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:09:06.812878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}})  2026-02-14 05:09:06.812889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:09:06.812901 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:09:06.812911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:09:06.812922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:09:06.812933 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:09:06.812944 | orchestrator | 2026-02-14 05:09:06.812955 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-02-14 05:09:06.812966 | orchestrator | Saturday 14 February 2026 05:08:52 +0000 (0:00:01.943) 0:02:08.183 ***** 2026-02-14 05:09:06.812977 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:09:06.812989 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:09:06.813000 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:09:06.813010 | orchestrator | 2026-02-14 05:09:06.813021 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-02-14 05:09:06.813032 | orchestrator | Saturday 14 February 2026 05:08:54 +0000 (0:00:02.312) 0:02:10.496 ***** 2026-02-14 05:09:06.813042 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:09:06.813053 | orchestrator | ok: [testbed-node-1] 
2026-02-14 05:09:06.813063 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:09:06.813074 | orchestrator | 2026-02-14 05:09:06.813105 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-02-14 05:09:06.813116 | orchestrator | Saturday 14 February 2026 05:08:57 +0000 (0:00:02.852) 0:02:13.348 ***** 2026-02-14 05:09:06.813127 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:09:06.813138 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:09:06.813148 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:09:06.813159 | orchestrator | 2026-02-14 05:09:06.813170 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-02-14 05:09:06.813181 | orchestrator | Saturday 14 February 2026 05:08:58 +0000 (0:00:01.382) 0:02:14.731 ***** 2026-02-14 05:09:06.813194 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 05:09:06.813206 | orchestrator | 2026-02-14 05:09:06.813219 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-02-14 05:09:06.813232 | orchestrator | Saturday 14 February 2026 05:09:00 +0000 (0:00:01.792) 0:02:16.523 ***** 2026-02-14 05:09:06.813277 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 
check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-14 05:09:06.813314 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-14 05:09:06.813329 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-14 05:09:06.813343 | orchestrator | 2026-02-14 05:09:06.813356 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single 
external frontend] *** 2026-02-14 05:09:06.813378 | orchestrator | Saturday 14 February 2026 05:09:04 +0000 (0:00:03.562) 0:02:20.086 ***** 2026-02-14 05:09:06.813392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-14 05:09:06.813413 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:09:06.813427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-14 05:09:06.813440 | orchestrator | 
skipping: [testbed-node-0] 2026-02-14 05:09:06.813461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-14 05:09:19.654350 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:09:19.654444 | orchestrator | 2026-02-14 05:09:19.654457 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-02-14 05:09:19.654467 | orchestrator | Saturday 14 February 2026 05:09:06 +0000 (0:00:02.595) 0:02:22.681 ***** 2026-02-14 05:09:19.654491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-14 05:09:19.654502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 
fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-14 05:09:19.654512 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:09:19.654520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-14 05:09:19.654546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-14 05:09:19.654554 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:09:19.654561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-14 05:09:19.654569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 
192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-14 05:09:19.654576 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:09:19.654584 | orchestrator | 2026-02-14 05:09:19.654591 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-02-14 05:09:19.654599 | orchestrator | Saturday 14 February 2026 05:09:09 +0000 (0:00:02.883) 0:02:25.565 ***** 2026-02-14 05:09:19.654606 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:09:19.654613 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:09:19.654620 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:09:19.654628 | orchestrator | 2026-02-14 05:09:19.654635 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-02-14 05:09:19.654642 | orchestrator | Saturday 14 February 2026 05:09:11 +0000 (0:00:01.508) 0:02:27.073 ***** 2026-02-14 05:09:19.654649 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:09:19.654657 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:09:19.654669 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:09:19.654681 | orchestrator | 2026-02-14 05:09:19.654692 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-02-14 05:09:19.654704 | orchestrator | Saturday 14 February 2026 05:09:13 +0000 (0:00:02.617) 0:02:29.691 ***** 2026-02-14 05:09:19.654715 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 05:09:19.654726 | orchestrator | 2026-02-14 05:09:19.654739 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-02-14 05:09:19.654751 | orchestrator | Saturday 14 February 2026 05:09:15 +0000 (0:00:01.857) 0:02:31.549 ***** 2026-02-14 05:09:19.654791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': 
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:09:19.654811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 05:09:19.654820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:09:19.654829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-14 05:09:19.654837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 05:09:19.654851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-14 05:09:21.687872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-14 05:09:21.688001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-14 05:09:21.688019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:09:21.688035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 05:09:21.688048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-14 05:09:21.688084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-14 05:09:21.688106 | orchestrator | 2026-02-14 05:09:21.688119 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external 
frontend] *** 2026-02-14 05:09:21.688131 | orchestrator | Saturday 14 February 2026 05:09:20 +0000 (0:00:05.084) 0:02:36.634 ***** 2026-02-14 05:09:21.688145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-14 05:09:21.688157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 05:09:21.688169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-14 05:09:21.688180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-14 05:09:21.688198 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:09:21.688225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-14 05:09:32.849389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 05:09:32.849499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 
'timeout': '30'}}})  2026-02-14 05:09:32.849514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-14 05:09:32.849526 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:09:32.849539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-14 05:09:32.849592 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 05:09:32.849621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-14 05:09:32.849633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-14 05:09:32.849643 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:09:32.849653 | orchestrator | 2026-02-14 05:09:32.849664 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-02-14 05:09:32.849675 | orchestrator | Saturday 14 February 2026 05:09:22 +0000 (0:00:02.015) 0:02:38.649 ***** 2026-02-14 05:09:32.849685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:09:32.849697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:09:32.849708 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:09:32.849718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:09:32.849729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:09:32.849793 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:09:32.849804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:09:32.849814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:09:32.849824 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:09:32.849833 | orchestrator | 2026-02-14 05:09:32.849843 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-02-14 05:09:32.849853 | orchestrator | Saturday 14 February 2026 05:09:24 +0000 (0:00:02.025) 0:02:40.675 ***** 2026-02-14 05:09:32.849862 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:09:32.849873 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:09:32.849882 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:09:32.849892 | orchestrator | 2026-02-14 05:09:32.849906 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-02-14 05:09:32.849916 | orchestrator | Saturday 14 February 2026 05:09:27 +0000 (0:00:02.232) 0:02:42.907 ***** 2026-02-14 05:09:32.849926 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:09:32.849936 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:09:32.849945 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:09:32.849955 | orchestrator | 2026-02-14 05:09:32.849964 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-02-14 05:09:32.849974 | orchestrator | Saturday 14 February 2026 05:09:29 +0000 (0:00:02.810) 0:02:45.717 ***** 2026-02-14 05:09:32.849983 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:09:32.849993 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:09:32.850002 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:09:32.850012 | orchestrator | 2026-02-14 05:09:32.850075 | 
orchestrator | TASK [include_role : cyborg] *************************************************** 2026-02-14 05:09:32.850086 | orchestrator | Saturday 14 February 2026 05:09:31 +0000 (0:00:01.593) 0:02:47.311 ***** 2026-02-14 05:09:32.850095 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:09:32.850105 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:09:32.850122 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:09:38.220386 | orchestrator | 2026-02-14 05:09:38.220476 | orchestrator | TASK [include_role : designate] ************************************************ 2026-02-14 05:09:38.220487 | orchestrator | Saturday 14 February 2026 05:09:32 +0000 (0:00:01.410) 0:02:48.721 ***** 2026-02-14 05:09:38.220494 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 05:09:38.220500 | orchestrator | 2026-02-14 05:09:38.220506 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-02-14 05:09:38.220512 | orchestrator | Saturday 14 February 2026 05:09:34 +0000 (0:00:01.793) 0:02:50.515 ***** 2026-02-14 05:09:38.220523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:09:38.220585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-14 05:09:38.220596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-14 05:09:38.220615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-14 05:09:38.220622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-14 05:09:38.220642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:09:38.220650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-14 05:09:38.220661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-14 05:09:38.220667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-14 05:09:38.220676 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-14 05:09:38.220683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-14 05:09:38.220693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-14 05:09:40.262286 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-14 05:09:40.262465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-14 05:09:40.262483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:09:40.262514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-14 05:09:40.262527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-14 05:09:40.262558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-14 05:09:40.262570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-14 05:09:40.262589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-14 05:09:40.262601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 
'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-14 05:09:40.262613 | orchestrator | 2026-02-14 05:09:40.262626 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-02-14 05:09:40.262638 | orchestrator | Saturday 14 February 2026 05:09:39 +0000 (0:00:04.934) 0:02:55.450 ***** 2026-02-14 05:09:40.262671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-14 05:09:40.262696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-14 05:09:40.262717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-14 05:09:41.453541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-14 05:09:41.453673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-14 05:09:41.453702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-14 05:09:41.453724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-14 05:09:41.453738 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:09:41.453753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-14 05:09:41.453825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-14 05:09:41.453850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-14 05:09:41.454873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-14 05:09:41.454927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-14 05:09:41.454941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-14 05:09:41.454960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-14 05:09:41.455006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-14 05:09:56.412654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-14 05:09:56.412773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-14 05:09:56.412791 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:09:56.412806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-14 05:09:56.412818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-14 05:09:56.412829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-14 05:09:56.412879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-14 05:09:56.412892 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:09:56.412903 | orchestrator | 2026-02-14 05:09:56.412915 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-02-14 05:09:56.412928 | orchestrator | Saturday 14 February 2026 05:09:41 +0000 (0:00:01.877) 0:02:57.328 ***** 2026-02-14 05:09:56.412955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:09:56.412970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:09:56.412983 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:09:56.412994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:09:56.413005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:09:56.413016 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:09:56.413027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:09:56.413038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:09:56.413049 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:09:56.413059 | orchestrator | 2026-02-14 05:09:56.413070 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-02-14 05:09:56.413081 | orchestrator | Saturday 14 February 2026 05:09:43 +0000 (0:00:02.005) 0:02:59.333 ***** 2026-02-14 05:09:56.413093 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:09:56.413104 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:09:56.413115 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:09:56.413126 | orchestrator | 2026-02-14 05:09:56.413136 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-02-14 05:09:56.413147 | orchestrator | Saturday 14 February 2026 05:09:45 +0000 (0:00:02.183) 0:03:01.517 ***** 2026-02-14 05:09:56.413158 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:09:56.413168 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:09:56.413179 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:09:56.413192 | orchestrator | 2026-02-14 05:09:56.413205 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-02-14 05:09:56.413226 | orchestrator | Saturday 14 February 2026 05:09:48 +0000 (0:00:02.954) 0:03:04.471 ***** 2026-02-14 05:09:56.413238 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:09:56.413251 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:09:56.413265 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:09:56.413278 | orchestrator | 2026-02-14 05:09:56.413291 | orchestrator | TASK 
[include_role : glance] *************************************************** 2026-02-14 05:09:56.413310 | orchestrator | Saturday 14 February 2026 05:09:49 +0000 (0:00:01.407) 0:03:05.879 ***** 2026-02-14 05:09:56.413331 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 05:09:56.413389 | orchestrator | 2026-02-14 05:09:56.413404 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-02-14 05:09:56.413417 | orchestrator | Saturday 14 February 2026 05:09:51 +0000 (0:00:01.851) 0:03:07.730 ***** 2026-02-14 05:09:56.413451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-14 05:09:57.669968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-14 05:09:57.670200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-14 05:09:57.670250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-14 05:09:57.670284 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-14 05:09:57.670314 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-14 05:10:01.565544 | 
orchestrator | 2026-02-14 05:10:01.565662 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-02-14 05:10:01.565695 | orchestrator | Saturday 14 February 2026 05:09:57 +0000 (0:00:05.817) 0:03:13.548 ***** 2026-02-14 05:10:01.565728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-14 05:10:01.565753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-14 05:10:01.565785 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:10:01.565816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-14 05:10:01.565834 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-14 05:10:01.565845 | orchestrator | skipping: [testbed-node-1] 
2026-02-14 05:10:01.565864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-14 05:10:20.930484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 
'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-14 05:10:20.930582 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:10:20.930594 | orchestrator | 2026-02-14 05:10:20.930603 | orchestrator | TASK [haproxy-config : Configuring firewall 
for glance] ************************ 2026-02-14 05:10:20.930613 | orchestrator | Saturday 14 February 2026 05:10:02 +0000 (0:00:05.064) 0:03:18.612 ***** 2026-02-14 05:10:20.930622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-14 05:10:20.930653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-14 05:10:20.930663 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:10:20.930672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-14 05:10:20.930696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-14 05:10:20.930705 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:10:20.930719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-14 05:10:20.930728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-14 05:10:20.930736 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:10:20.930745 | orchestrator | 2026-02-14 05:10:20.930753 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-02-14 05:10:20.930761 | orchestrator 
| Saturday 14 February 2026 05:10:07 +0000 (0:00:04.906) 0:03:23.519 ***** 2026-02-14 05:10:20.930769 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:10:20.930778 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:10:20.930785 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:10:20.930793 | orchestrator | 2026-02-14 05:10:20.930801 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-02-14 05:10:20.930809 | orchestrator | Saturday 14 February 2026 05:10:09 +0000 (0:00:02.297) 0:03:25.817 ***** 2026-02-14 05:10:20.930822 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:10:20.930830 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:10:20.930837 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:10:20.930845 | orchestrator | 2026-02-14 05:10:20.930853 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-02-14 05:10:20.930861 | orchestrator | Saturday 14 February 2026 05:10:13 +0000 (0:00:03.096) 0:03:28.914 ***** 2026-02-14 05:10:20.930869 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:10:20.930876 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:10:20.930884 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:10:20.930892 | orchestrator | 2026-02-14 05:10:20.930900 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-02-14 05:10:20.930907 | orchestrator | Saturday 14 February 2026 05:10:14 +0000 (0:00:01.371) 0:03:30.285 ***** 2026-02-14 05:10:20.930915 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 05:10:20.930923 | orchestrator | 2026-02-14 05:10:20.930931 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-02-14 05:10:20.930938 | orchestrator | Saturday 14 February 2026 05:10:16 +0000 (0:00:01.747) 0:03:32.033 ***** 2026-02-14 05:10:20.930948 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:10:20.930962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:10:37.814193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:10:37.814316 | orchestrator | 2026-02-14 05:10:37.814334 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-02-14 05:10:37.814347 | orchestrator | Saturday 14 February 2026 05:10:20 +0000 (0:00:04.770) 0:03:36.803 ***** 2026-02-14 05:10:37.814360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-14 05:10:37.814395 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:10:37.814408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-14 05:10:37.814420 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:10:37.814431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-14 05:10:37.814497 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:10:37.814512 | orchestrator | 2026-02-14 05:10:37.814524 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-02-14 05:10:37.814535 | orchestrator | Saturday 14 February 2026 05:10:22 +0000 (0:00:01.724) 0:03:38.528 ***** 2026-02-14 05:10:37.814547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:10:37.814561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:10:37.814574 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:10:37.814610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:10:37.814628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:10:37.814641 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:10:37.814654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:10:37.814676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:10:37.814689 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:10:37.814701 | orchestrator | 2026-02-14 05:10:37.814713 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-02-14 05:10:37.814726 | orchestrator | Saturday 14 February 2026 05:10:24 +0000 (0:00:01.576) 0:03:40.105 ***** 2026-02-14 05:10:37.814738 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:10:37.814751 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:10:37.814763 | orchestrator | ok: [testbed-node-2] 2026-02-14 
05:10:37.814775 | orchestrator | 2026-02-14 05:10:37.814787 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-02-14 05:10:37.814800 | orchestrator | Saturday 14 February 2026 05:10:26 +0000 (0:00:02.372) 0:03:42.477 ***** 2026-02-14 05:10:37.814812 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:10:37.814824 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:10:37.814835 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:10:37.814847 | orchestrator | 2026-02-14 05:10:37.814900 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-02-14 05:10:37.814913 | orchestrator | Saturday 14 February 2026 05:10:29 +0000 (0:00:02.910) 0:03:45.388 ***** 2026-02-14 05:10:37.814938 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:10:37.814951 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:10:37.814964 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:10:37.814976 | orchestrator | 2026-02-14 05:10:37.814990 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-02-14 05:10:37.815002 | orchestrator | Saturday 14 February 2026 05:10:30 +0000 (0:00:01.394) 0:03:46.783 ***** 2026-02-14 05:10:37.815014 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 05:10:37.815025 | orchestrator | 2026-02-14 05:10:37.815036 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-02-14 05:10:37.815046 | orchestrator | Saturday 14 February 2026 05:10:32 +0000 (0:00:01.733) 0:03:48.516 ***** 2026-02-14 05:10:37.815078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 
'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-14 05:10:39.633908 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-14 05:10:39.634135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-14 05:10:39.634185 | orchestrator | 2026-02-14 05:10:39.634200 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-02-14 05:10:39.634212 | orchestrator | Saturday 14 February 2026 05:10:37 +0000 (0:00:05.171) 0:03:53.687 ***** 2026-02-14 05:10:39.634225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-14 05:10:39.634239 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:10:39.634262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-14 05:10:48.726691 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:10:48.726909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 
'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-14 05:10:48.726969 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:10:48.726983 | orchestrator | 2026-02-14 05:10:48.726995 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] 
*********************** 2026-02-14 05:10:48.727008 | orchestrator | Saturday 14 February 2026 05:10:39 +0000 (0:00:01.822) 0:03:55.510 ***** 2026-02-14 05:10:48.727020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-14 05:10:48.727040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-14 05:10:48.727054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-14 05:10:48.727067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-14 05:10:48.727078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-14 05:10:48.727091 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:10:48.727121 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-14 05:10:48.727134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-14 05:10:48.727145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-14 05:10:48.727156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-14 05:10:48.727167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-14 05:10:48.727178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}})  2026-02-14 05:10:48.727196 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:10:48.727208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-14 05:10:48.727219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-14 05:10:48.727230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-14 05:10:48.727241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-14 05:10:48.727257 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:10:48.727268 | orchestrator | 2026-02-14 05:10:48.727280 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-02-14 05:10:48.727290 | orchestrator | Saturday 14 February 2026 05:10:41 +0000 (0:00:02.050) 0:03:57.560 ***** 2026-02-14 05:10:48.727301 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:10:48.727312 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:10:48.727323 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:10:48.727334 | 
orchestrator | 2026-02-14 05:10:48.727344 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-02-14 05:10:48.727355 | orchestrator | Saturday 14 February 2026 05:10:43 +0000 (0:00:02.274) 0:03:59.834 ***** 2026-02-14 05:10:48.727366 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:10:48.727377 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:10:48.727388 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:10:48.727398 | orchestrator | 2026-02-14 05:10:48.727409 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-02-14 05:10:48.727420 | orchestrator | Saturday 14 February 2026 05:10:47 +0000 (0:00:03.154) 0:04:02.988 ***** 2026-02-14 05:10:48.727431 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:10:48.727441 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:10:48.727452 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:10:48.727462 | orchestrator | 2026-02-14 05:10:48.727500 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-02-14 05:10:48.727512 | orchestrator | Saturday 14 February 2026 05:10:48 +0000 (0:00:01.399) 0:04:04.388 ***** 2026-02-14 05:10:48.727530 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:10:59.150671 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:10:59.150818 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:10:59.150831 | orchestrator | 2026-02-14 05:10:59.150844 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-02-14 05:10:59.150856 | orchestrator | Saturday 14 February 2026 05:10:49 +0000 (0:00:01.354) 0:04:05.743 ***** 2026-02-14 05:10:59.150866 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 05:10:59.150876 | orchestrator | 2026-02-14 05:10:59.150886 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] 
******************* 2026-02-14 05:10:59.150896 | orchestrator | Saturday 14 February 2026 05:10:52 +0000 (0:00:02.194) 0:04:07.938 ***** 2026-02-14 05:10:59.150912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-14 05:10:59.150968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-14 05:10:59.150982 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-14 05:10:59.151011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-14 05:10:59.151042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-14 05:10:59.151054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-14 05:10:59.151072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-14 05:10:59.151083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-14 05:10:59.151098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-14 05:10:59.151111 | orchestrator | 2026-02-14 05:10:59.151122 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-02-14 05:10:59.151134 | orchestrator | Saturday 14 February 2026 05:10:57 +0000 (0:00:05.016) 0:04:12.954 ***** 2026-02-14 05:10:59.151154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-14 05:11:01.025797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-14 05:11:01.025874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-14 05:11:01.025881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-14 05:11:01.025905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-14 05:11:01.025910 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:11:01.025915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-14 05:11:01.025931 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:11:01.025947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin', 'option httpchk']}}}})  2026-02-14 05:11:01.025952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-14 05:11:01.025956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-14 05:11:01.025960 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:11:01.025964 | orchestrator | 2026-02-14 05:11:01.025969 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-02-14 05:11:01.025974 | orchestrator | Saturday 14 February 2026 05:10:59 +0000 (0:00:02.061) 0:04:15.016 ***** 2026-02-14 05:11:01.025982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin', 'option httpchk']}})  2026-02-14 05:11:01.025988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-14 05:11:01.025993 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:11:01.025998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-14 05:11:01.026002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-14 05:11:01.026009 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:11:01.026013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-14 05:11:01.026051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-14 05:11:01.026056 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:11:01.026060 | orchestrator | 2026-02-14 05:11:01.026064 | orchestrator | TASK [proxysql-config : Copying over keystone 
ProxySQL users config] *********** 2026-02-14 05:11:01.026071 | orchestrator | Saturday 14 February 2026 05:11:01 +0000 (0:00:01.882) 0:04:16.899 ***** 2026-02-14 05:11:16.406269 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:11:16.406392 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:11:16.406408 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:11:16.406420 | orchestrator | 2026-02-14 05:11:16.406433 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-02-14 05:11:16.406445 | orchestrator | Saturday 14 February 2026 05:11:03 +0000 (0:00:02.301) 0:04:19.201 ***** 2026-02-14 05:11:16.406457 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:11:16.406468 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:11:16.406479 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:11:16.406490 | orchestrator | 2026-02-14 05:11:16.406501 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-02-14 05:11:16.406512 | orchestrator | Saturday 14 February 2026 05:11:06 +0000 (0:00:03.157) 0:04:22.359 ***** 2026-02-14 05:11:16.406578 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:11:16.406592 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:11:16.406603 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:11:16.406614 | orchestrator | 2026-02-14 05:11:16.406625 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-02-14 05:11:16.406637 | orchestrator | Saturday 14 February 2026 05:11:07 +0000 (0:00:01.423) 0:04:23.783 ***** 2026-02-14 05:11:16.406648 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 05:11:16.406660 | orchestrator | 2026-02-14 05:11:16.406671 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-02-14 05:11:16.406682 | orchestrator | Saturday 14 February 2026 05:11:09 +0000 (0:00:01.831) 
0:04:25.615 ***** 2026-02-14 05:11:16.406697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:11:16.406733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-14 05:11:16.406769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:11:16.406803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option 
httpchk']}}}}) 2026-02-14 05:11:16.406819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-14 05:11:16.406833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-14 05:11:16.406854 | orchestrator | 2026-02-14 05:11:16.406873 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-02-14 05:11:16.406888 | orchestrator | Saturday 14 February 2026 05:11:14 +0000 (0:00:04.968) 0:04:30.584 ***** 2026-02-14 05:11:16.406902 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-14 05:11:16.406925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-14 05:11:29.601859 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:11:29.602141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-14 05:11:29.602191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-14 05:11:29.602249 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:11:29.602286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-14 05:11:29.602300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-14 05:11:29.602312 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:11:29.602323 | orchestrator | 2026-02-14 05:11:29.602335 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-02-14 05:11:29.602347 | orchestrator | Saturday 14 February 2026 
05:11:16 +0000 (0:00:01.696) 0:04:32.280 ***** 2026-02-14 05:11:29.602382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:11:29.602399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:11:29.602414 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:11:29.602426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:11:29.602439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:11:29.602452 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:11:29.602465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:11:29.602478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:11:29.602497 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:11:29.602508 | orchestrator | 2026-02-14 05:11:29.602519 | orchestrator | TASK 
[proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-02-14 05:11:29.602530 | orchestrator | Saturday 14 February 2026 05:11:18 +0000 (0:00:02.036) 0:04:34.317 ***** 2026-02-14 05:11:29.602541 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:11:29.602627 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:11:29.602641 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:11:29.602651 | orchestrator | 2026-02-14 05:11:29.602662 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-02-14 05:11:29.602673 | orchestrator | Saturday 14 February 2026 05:11:20 +0000 (0:00:02.290) 0:04:36.607 ***** 2026-02-14 05:11:29.602683 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:11:29.602694 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:11:29.602704 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:11:29.602715 | orchestrator | 2026-02-14 05:11:29.602726 | orchestrator | TASK [include_role : manila] *************************************************** 2026-02-14 05:11:29.602743 | orchestrator | Saturday 14 February 2026 05:11:23 +0000 (0:00:02.967) 0:04:39.575 ***** 2026-02-14 05:11:29.602763 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 05:11:29.602780 | orchestrator | 2026-02-14 05:11:29.602808 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-02-14 05:11:29.602827 | orchestrator | Saturday 14 February 2026 05:11:25 +0000 (0:00:02.190) 0:04:41.766 ***** 2026-02-14 05:11:29.602848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:11:29.602873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 05:11:29.602910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-14 05:11:31.347892 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-14 05:11:31.348020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:11:31.348055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:11:31.348067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 05:11:31.348079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 05:11:31.348110 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-14 05:11:31.348131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-14 05:11:31.348149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-14 05:11:31.348160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-14 05:11:31.348172 | orchestrator | 2026-02-14 05:11:31.348186 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-02-14 05:11:31.348198 | orchestrator | Saturday 14 February 2026 05:11:30 +0000 (0:00:04.797) 0:04:46.563 ***** 2026-02-14 05:11:31.348212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 
'backend_http_extra': ['option httpchk']}}}})  2026-02-14 05:11:31.348231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 05:11:34.441556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-14 05:11:34.441719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-14 05:11:34.441737 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:11:34.441769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-14 05:11:34.441782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 05:11:34.441793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 
'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-14 05:11:34.441844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-14 05:11:34.441858 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:11:34.441870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-14 05:11:34.441887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 05:11:34.441898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-14 05:11:34.441910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-14 05:11:34.441921 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:11:34.441932 | orchestrator | 2026-02-14 05:11:34.441944 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-02-14 05:11:34.441957 | orchestrator | Saturday 14 February 2026 05:11:32 +0000 (0:00:01.751) 0:04:48.315 ***** 2026-02-14 05:11:34.441978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:11:34.441992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:11:34.442005 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:11:34.442076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:11:34.442100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:11:50.282710 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:11:50.282825 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:11:50.282845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:11:50.282859 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:11:50.282871 | orchestrator | 2026-02-14 05:11:50.282883 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-02-14 05:11:50.282895 | orchestrator | Saturday 14 February 2026 05:11:34 +0000 (0:00:01.986) 0:04:50.302 ***** 2026-02-14 05:11:50.282906 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:11:50.282918 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:11:50.282929 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:11:50.282939 | orchestrator | 2026-02-14 05:11:50.282951 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-02-14 05:11:50.282962 | orchestrator | Saturday 14 February 2026 05:11:36 +0000 (0:00:02.390) 0:04:52.692 ***** 2026-02-14 05:11:50.282973 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:11:50.282983 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:11:50.282994 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:11:50.283005 | orchestrator | 2026-02-14 05:11:50.283015 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-02-14 05:11:50.283026 | orchestrator | Saturday 14 February 2026 05:11:40 +0000 (0:00:03.210) 0:04:55.903 ***** 2026-02-14 05:11:50.283037 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 05:11:50.283048 | orchestrator | 
2026-02-14 05:11:50.283077 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-02-14 05:11:50.283088 | orchestrator | Saturday 14 February 2026 05:11:42 +0000 (0:00:02.604) 0:04:58.508 ***** 2026-02-14 05:11:50.283099 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2026-02-14 05:11:50.283110 | orchestrator | 2026-02-14 05:11:50.283121 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-02-14 05:11:50.283132 | orchestrator | Saturday 14 February 2026 05:11:46 +0000 (0:00:04.001) 0:05:02.509 ***** 2026-02-14 05:11:50.283148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': 
False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-14 05:11:50.283204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-14 05:11:50.283220 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:11:50.283235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': 
'192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-14 05:11:50.283248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-14 05:11:50.283269 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:11:50.283325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-14 05:11:53.826986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-14 05:11:53.827076 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:11:53.827089 | orchestrator | 2026-02-14 05:11:53.827098 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-02-14 05:11:53.827107 | orchestrator | Saturday 14 February 2026 05:11:50 +0000 (0:00:03.631) 0:05:06.141 ***** 2026-02-14 05:11:53.827132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-14 05:11:53.827158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-14 05:11:53.827167 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:11:53.827208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-14 05:11:53.827219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-14 
05:11:53.827241 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:11:53.827250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-14 05:11:53.827265 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-14 05:12:09.878691 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:12:09.878798 | orchestrator | 2026-02-14 05:12:09.878814 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-02-14 05:12:09.878827 | orchestrator | Saturday 14 February 2026 05:11:53 +0000 (0:00:03.558) 0:05:09.699 ***** 2026-02-14 05:12:09.878841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-14 05:12:09.878875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-14 05:12:09.878909 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:12:09.878921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-14 05:12:09.878933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-14 05:12:09.878944 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:12:09.878955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-14 05:12:09.878967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-14 05:12:09.878978 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:12:09.878989 | orchestrator | 2026-02-14 05:12:09.879001 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-02-14 05:12:09.879012 | orchestrator | Saturday 14 February 2026 05:11:57 +0000 (0:00:04.068) 0:05:13.768 ***** 2026-02-14 05:12:09.879023 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:12:09.879051 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:12:09.879063 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:12:09.879074 | orchestrator | 2026-02-14 05:12:09.879085 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-02-14 05:12:09.879096 | orchestrator | Saturday 14 February 2026 05:12:00 +0000 (0:00:03.050) 0:05:16.819 ***** 2026-02-14 05:12:09.879106 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:12:09.879117 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:12:09.879128 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:12:09.879146 | orchestrator | 2026-02-14 05:12:09.879157 | 
orchestrator | TASK [include_role : masakari] ************************************************* 2026-02-14 05:12:09.879167 | orchestrator | Saturday 14 February 2026 05:12:03 +0000 (0:00:02.565) 0:05:19.384 ***** 2026-02-14 05:12:09.879178 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:12:09.879192 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:12:09.879204 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:12:09.879217 | orchestrator | 2026-02-14 05:12:09.879230 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-02-14 05:12:09.879243 | orchestrator | Saturday 14 February 2026 05:12:04 +0000 (0:00:01.368) 0:05:20.752 ***** 2026-02-14 05:12:09.879256 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 05:12:09.879269 | orchestrator | 2026-02-14 05:12:09.879287 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-02-14 05:12:09.879301 | orchestrator | Saturday 14 February 2026 05:12:07 +0000 (0:00:02.323) 0:05:23.076 ***** 2026-02-14 05:12:09.879316 | orchestrator | ok: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-14 05:12:09.879330 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-14 05:12:09.879344 | orchestrator | ok: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-14 05:12:09.879357 | orchestrator | 2026-02-14 05:12:09.879369 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-02-14 05:12:09.879383 | orchestrator | Saturday 14 February 2026 05:12:09 +0000 (0:00:02.550) 0:05:25.627 ***** 2026-02-14 05:12:09.879402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-14 05:12:24.806122 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:12:24.806246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-14 05:12:24.806264 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:12:24.806276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-14 05:12:24.806286 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:12:24.806296 | orchestrator | 2026-02-14 05:12:24.806307 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-02-14 05:12:24.806319 | orchestrator | Saturday 14 February 2026 05:12:11 +0000 (0:00:01.753) 0:05:27.381 ***** 2026-02-14 05:12:24.806331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-14 05:12:24.806343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-14 05:12:24.806354 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:12:24.806364 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:12:24.806375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-14 05:12:24.806385 | orchestrator | skipping: [testbed-node-2] 2026-02-14 
05:12:24.806395 | orchestrator | 2026-02-14 05:12:24.806406 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-02-14 05:12:24.806416 | orchestrator | Saturday 14 February 2026 05:12:12 +0000 (0:00:01.445) 0:05:28.826 ***** 2026-02-14 05:12:24.806426 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:12:24.806436 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:12:24.806446 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:12:24.806456 | orchestrator | 2026-02-14 05:12:24.806486 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-02-14 05:12:24.806496 | orchestrator | Saturday 14 February 2026 05:12:14 +0000 (0:00:01.434) 0:05:30.260 ***** 2026-02-14 05:12:24.806506 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:12:24.806516 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:12:24.806526 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:12:24.806536 | orchestrator | 2026-02-14 05:12:24.806545 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-02-14 05:12:24.806555 | orchestrator | Saturday 14 February 2026 05:12:16 +0000 (0:00:02.267) 0:05:32.528 ***** 2026-02-14 05:12:24.806565 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:12:24.806575 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:12:24.806584 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:12:24.806594 | orchestrator | 2026-02-14 05:12:24.806604 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-02-14 05:12:24.806614 | orchestrator | Saturday 14 February 2026 05:12:18 +0000 (0:00:01.677) 0:05:34.206 ***** 2026-02-14 05:12:24.806624 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 05:12:24.806635 | orchestrator | 2026-02-14 05:12:24.806647 | orchestrator | TASK [haproxy-config : 
Copying over neutron haproxy config] ******************** 2026-02-14 05:12:24.806687 | orchestrator | Saturday 14 February 2026 05:12:20 +0000 (0:00:02.059) 0:05:36.265 ***** 2026-02-14 05:12:24.806726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:12:24.806744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-14 05:12:24.806757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-14 05:12:24.806778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-14 05:12:24.806800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-14 05:12:24.932325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-14 05:12:24.932417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-14 
05:12:24.932433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-14 05:12:24.932446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-14 05:12:24.932477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-14 05:12:24.932490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-14 05:12:24.932519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-14 05:12:24.932537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-14 05:12:24.932551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-14 05:12:24.932572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:12:24.932585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-14 05:12:24.932604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-14 05:12:25.049706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 
'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-14 05:12:25.049815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-14 05:12:25.049856 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:12:25.049870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-14 05:12:25.049912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-14 05:12:25.049927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-14 05:12:25.049939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-14 05:12:25.049959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-14 05:12:25.049972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-14 05:12:25.049984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-14 05:12:25.050010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-14 05:12:25.193873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-14 05:12:25.193997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-14 05:12:25.194063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-14 05:12:25.194082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-14 05:12:25.194097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': 
{'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-14 05:12:25.194135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-14 05:12:25.194169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-14 05:12:25.194192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-14 05:12:25.194208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-14 05:12:25.194225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-14 05:12:25.194244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-14 05:12:25.194257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-14 05:12:25.194285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-14 05:12:27.525544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-14 05:12:27.525650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-14 05:12:27.525745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 
'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-14 05:12:27.525781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-14 05:12:27.525793 | orchestrator | 2026-02-14 05:12:27.525806 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-02-14 05:12:27.525819 | orchestrator | Saturday 14 February 2026 05:12:26 +0000 (0:00:05.975) 0:05:42.241 ***** 2026-02-14 05:12:27.525850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-14 05:12:27.525892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-14 05:12:27.525914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-14 05:12:27.525943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-14 05:12:27.525966 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-14 05:12:27.525997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-14 05:12:27.526108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-14 05:12:27.612747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': 
False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-14 05:12:27.612849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-14 05:12:27.612885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-14 05:12:27.612901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-14 05:12:27.612934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-14 05:12:27.612966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': 
{'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-14 05:12:27.612980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-14 05:12:27.612993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-14 05:12:27.613010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-14 05:12:27.613029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-14 05:12:27.613049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-14 05:12:27.745387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-14 05:12:27.745466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-14 05:12:27.745477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-14 05:12:27.745505 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:12:27.745514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-14 05:12:27.745522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-14 05:12:27.745541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-14 05:12:27.745583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-14 05:12:27.745593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-14 05:12:27.745600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-14 05:12:27.745616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-14 05:12:27.745623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-14 05:12:27.745698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-14 05:12:29.007535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 
'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-14 05:12:29.007639 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:12:29.007729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-14 05:12:29.007769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-14 05:12:29.007783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-14 05:12:29.007815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-14 05:12:29.007828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-14 05:12:29.007841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-14 05:12:29.007872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-14 05:12:29.007885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-14 05:12:29.007897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-14 05:12:29.007909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-14 05:12:29.007931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-14 05:12:44.025160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-14 05:12:44.025341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-14 05:12:44.025361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-14 05:12:44.025372 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:12:44.025384 | orchestrator | 2026-02-14 05:12:44.025394 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-02-14 05:12:44.025404 | orchestrator | Saturday 14 February 2026 05:12:28 +0000 (0:00:02.636) 0:05:44.877 ***** 2026-02-14 05:12:44.025413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:12:44.025426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:12:44.025436 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:12:44.025445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:12:44.025454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:12:44.025463 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:12:44.025471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:12:44.025496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:12:44.025519 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:12:44.025528 | orchestrator | 2026-02-14 05:12:44.025537 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-02-14 05:12:44.025546 | orchestrator | Saturday 14 February 2026 05:12:31 
+0000 (0:00:02.965) 0:05:47.843 ***** 2026-02-14 05:12:44.025555 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:12:44.025564 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:12:44.025573 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:12:44.025581 | orchestrator | 2026-02-14 05:12:44.025590 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-02-14 05:12:44.025599 | orchestrator | Saturday 14 February 2026 05:12:34 +0000 (0:00:02.208) 0:05:50.052 ***** 2026-02-14 05:12:44.025607 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:12:44.025616 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:12:44.025625 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:12:44.025634 | orchestrator | 2026-02-14 05:12:44.025642 | orchestrator | TASK [include_role : placement] ************************************************ 2026-02-14 05:12:44.025651 | orchestrator | Saturday 14 February 2026 05:12:37 +0000 (0:00:02.968) 0:05:53.020 ***** 2026-02-14 05:12:44.025659 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 05:12:44.025668 | orchestrator | 2026-02-14 05:12:44.025677 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-02-14 05:12:44.025709 | orchestrator | Saturday 14 February 2026 05:12:39 +0000 (0:00:02.292) 0:05:55.312 ***** 2026-02-14 05:12:44.025726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-14 05:12:44.025739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-14 05:12:44.025758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-14 05:13:01.452242 | orchestrator | 2026-02-14 05:13:01.452360 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-02-14 05:13:01.452377 | orchestrator | Saturday 14 February 2026 05:12:44 +0000 (0:00:04.583) 0:05:59.896 ***** 2026-02-14 05:13:01.452410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-14 05:13:01.452427 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:13:01.452440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-14 05:13:01.452452 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:13:01.452464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 
'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-14 05:13:01.452497 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:13:01.452509 | orchestrator | 2026-02-14 05:13:01.452520 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-02-14 05:13:01.452532 | orchestrator | Saturday 14 February 2026 05:12:45 +0000 (0:00:01.660) 0:06:01.557 ***** 2026-02-14 05:13:01.452544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-14 05:13:01.452574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-14 05:13:01.452589 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:13:01.452600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-14 05:13:01.452611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-14 05:13:01.452623 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:13:01.452634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-14 05:13:01.452650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-14 05:13:01.452662 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:13:01.452673 | orchestrator | 2026-02-14 05:13:01.452683 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-02-14 05:13:01.452694 | orchestrator | Saturday 14 February 2026 05:12:47 +0000 (0:00:01.866) 0:06:03.423 ***** 2026-02-14 05:13:01.452705 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:13:01.452742 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:13:01.452755 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:13:01.452768 | orchestrator | 2026-02-14 05:13:01.452781 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-02-14 05:13:01.452793 | orchestrator | Saturday 14 February 2026 05:12:49 +0000 (0:00:02.252) 0:06:05.676 ***** 2026-02-14 05:13:01.452806 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:13:01.452819 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:13:01.452831 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:13:01.452843 | orchestrator | 2026-02-14 05:13:01.452856 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-02-14 
05:13:01.452869 | orchestrator | Saturday 14 February 2026 05:12:52 +0000 (0:00:02.935) 0:06:08.611 ***** 2026-02-14 05:13:01.452883 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 05:13:01.452895 | orchestrator | 2026-02-14 05:13:01.452906 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-02-14 05:13:01.452926 | orchestrator | Saturday 14 February 2026 05:12:55 +0000 (0:00:02.473) 0:06:11.085 ***** 2026-02-14 05:13:01.452938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:13:01.452960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:13:02.662956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:13:02.663063 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:13:02.663102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 05:13:02.663117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-14 05:13:02.663149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:13:02.663169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 05:13:02.663211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-14 05:13:02.663231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:13:02.663243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 05:13:02.663255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-14 05:13:02.663267 | orchestrator | 2026-02-14 05:13:02.663280 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-02-14 05:13:02.663301 | orchestrator | Saturday 14 February 2026 05:13:02 +0000 (0:00:07.449) 0:06:18.534 ***** 2026-02-14 05:13:03.432638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-14 05:13:03.432806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-14 05:13:03.432849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-14 05:13:03.432862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-14 05:13:03.432875 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:13:03.432910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-14 05:13:03.432932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-14 05:13:03.432952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-14 05:13:03.432964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-14 05:13:03.432975 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:13:03.432987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-14 05:13:03.433014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-14 05:13:25.191899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-14 05:13:25.192030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-14 05:13:25.192047 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:13:25.192059 | orchestrator |
2026-02-14 05:13:25.192070 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2026-02-14 05:13:25.192081 | orchestrator | Saturday 14 February 2026 05:13:04 +0000 (0:00:01.940) 0:06:20.475 *****
2026-02-14 05:13:25.192093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-14 05:13:25.192106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-14 05:13:25.192118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-14 05:13:25.192129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-14 05:13:25.192139 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:13:25.192148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-14 05:13:25.192158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-14 05:13:25.192168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-14 05:13:25.192178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-14 05:13:25.192187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-14 05:13:25.192234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-14 05:13:25.192245 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:13:25.192255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-14 05:13:25.192265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-14 05:13:25.192275 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:13:25.192285 | orchestrator |
2026-02-14 05:13:25.192295 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2026-02-14 05:13:25.192304 | orchestrator | Saturday 14 February 2026 05:13:07 +0000 (0:00:02.585) 0:06:23.060 *****
2026-02-14 05:13:25.192314 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:13:25.192324 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:13:25.192333 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:13:25.192343 | orchestrator |
2026-02-14 05:13:25.192352 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2026-02-14 05:13:25.192362 | orchestrator | Saturday 14 February 2026 05:13:09 +0000 (0:00:02.360) 0:06:25.421 *****
2026-02-14 05:13:25.192371 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:13:25.192380 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:13:25.192390 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:13:25.192402 | orchestrator |
2026-02-14 05:13:25.192413 | orchestrator | TASK [include_role : nova-cell] ************************************************
2026-02-14 05:13:25.192424 | orchestrator | Saturday 14 February 2026 05:13:12 +0000 (0:00:03.119) 0:06:28.540 *****
2026-02-14 05:13:25.192435 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 05:13:25.192446 | orchestrator |
2026-02-14 05:13:25.192457 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2026-02-14 05:13:25.192468 | orchestrator | Saturday 14 February 2026 05:13:15 +0000 (0:00:02.997) 0:06:31.538 *****
2026-02-14 05:13:25.192479 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2026-02-14 05:13:25.192490 | orchestrator |
2026-02-14 05:13:25.192501 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2026-02-14 05:13:25.192512 | orchestrator | Saturday 14 February 2026 05:13:17 +0000 (0:00:01.689) 0:06:33.227 *****
2026-02-14 05:13:25.192525 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-14 05:13:25.192540 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-14 05:13:25.192559 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-14 05:13:25.192571 | orchestrator |
2026-02-14 05:13:25.192581 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2026-02-14 05:13:25.192592 | orchestrator | Saturday 14 February 2026 05:13:23 +0000 (0:00:05.694) 0:06:38.921 *****
2026-02-14 05:13:25.192609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-14 05:13:25.192627 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:13:49.154465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-14 05:13:49.154612 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:13:49.154633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-14 05:13:49.154646 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:13:49.154658 | orchestrator |
2026-02-14 05:13:49.154670 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2026-02-14 05:13:49.154683 | orchestrator | Saturday 14 February 2026 05:13:25 +0000 (0:00:02.139) 0:06:41.061 *****
2026-02-14 05:13:49.154696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-14 05:13:49.154711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-14 05:13:49.154723 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:13:49.154734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-14 05:13:49.154746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-14 05:13:49.154757 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:13:49.154839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-14 05:13:49.154853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-14 05:13:49.154865 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:13:49.154876 | orchestrator |
2026-02-14 05:13:49.154888 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-02-14 05:13:49.154899 | orchestrator | Saturday 14 February 2026 05:13:27 +0000 (0:00:02.614) 0:06:43.676 *****
2026-02-14 05:13:49.154910 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:13:49.154921 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:13:49.154932 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:13:49.154943 | orchestrator |
2026-02-14 05:13:49.154956 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-02-14 05:13:49.154968 | orchestrator | Saturday 14 February 2026 05:13:32 +0000 (0:00:04.668) 0:06:48.344 *****
2026-02-14 05:13:49.154981 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:13:49.154993 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:13:49.155005 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:13:49.155017 | orchestrator |
2026-02-14 05:13:49.155030 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2026-02-14 05:13:49.155042 | orchestrator | Saturday 14 February 2026 05:13:36 +0000 (0:00:03.955) 0:06:52.300 *****
2026-02-14 05:13:49.155056 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2026-02-14 05:13:49.155070 | orchestrator |
2026-02-14 05:13:49.155085 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2026-02-14 05:13:49.155105 | orchestrator | Saturday 14 February 2026 05:13:38 +0000 (0:00:01.723) 0:06:54.024 *****
2026-02-14 05:13:49.155166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-14 05:13:49.155190 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:13:49.155210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-14 05:13:49.155230 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:13:49.155250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-14 05:13:49.155270 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:13:49.155302 | orchestrator |
2026-02-14 05:13:49.155322 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2026-02-14 05:13:49.155342 | orchestrator | Saturday 14 February 2026 05:13:40 +0000 (0:00:02.419) 0:06:56.444 *****
2026-02-14 05:13:49.155354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-14 05:13:49.155365 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:13:49.155376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-14 05:13:49.155388 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:13:49.155399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-14 05:13:49.155410 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:13:49.155420 | orchestrator |
2026-02-14 05:13:49.155431 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2026-02-14 05:13:49.155442 | orchestrator | Saturday 14 February 2026 05:13:43 +0000 (0:00:02.519) 0:06:58.963 *****
2026-02-14 05:13:49.155453 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:13:49.155464 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:13:49.155474 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:13:49.155485 | orchestrator |
2026-02-14 05:13:49.155496 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-02-14 05:13:49.155507 | orchestrator | Saturday 14 February 2026 05:13:45 +0000 (0:00:02.419) 0:07:01.382 *****
2026-02-14 05:13:49.155521 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:13:49.155540 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:13:49.155558 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:13:49.155582 | orchestrator |
2026-02-14 05:13:49.155615 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-02-14 05:13:49.155636 | orchestrator | Saturday 14 February 2026 05:13:49 +0000 (0:00:03.644) 0:07:05.027 *****
2026-02-14 05:14:17.352936 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:14:17.353057 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:14:17.353075 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:14:17.353088 | orchestrator |
2026-02-14 05:14:17.353101 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2026-02-14 05:14:17.353114 | orchestrator | Saturday 14 February 2026 05:13:53 +0000 (0:00:03.966) 0:07:08.993 *****
2026-02-14 05:14:17.353125 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2026-02-14 05:14:17.353138 | orchestrator |
2026-02-14 05:14:17.353149 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2026-02-14 05:14:17.353161 | orchestrator | Saturday 14 February 2026 05:13:55 +0000 (0:00:02.317) 0:07:11.311 *****
2026-02-14 05:14:17.353198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-14 05:14:17.353214 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:14:17.353226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-14 05:14:17.353237 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:14:17.353248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-14 05:14:17.353260 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:14:17.353271 | orchestrator |
2026-02-14 05:14:17.353282 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2026-02-14 05:14:17.353294 | orchestrator | Saturday 14 February 2026 05:13:57 +0000 (0:00:02.530) 0:07:13.841 *****
2026-02-14 05:14:17.353305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-14 05:14:17.353316 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:14:17.353327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-14 05:14:17.353339 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:14:17.353382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-14 05:14:17.353401 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:14:17.353420 | orchestrator |
2026-02-14 05:14:17.353434 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2026-02-14 05:14:17.353447 | orchestrator | Saturday 14 February 2026 05:14:00 +0000 (0:00:02.530) 0:07:16.371 *****
2026-02-14 05:14:17.353459 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:14:17.353472 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:14:17.353483 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:14:17.353495 | orchestrator |
2026-02-14 05:14:17.353508 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-02-14 05:14:17.353520 | orchestrator | Saturday 14 February 2026 05:14:02 +0000 (0:00:02.442) 0:07:18.814 *****
2026-02-14 05:14:17.353532 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:14:17.353545 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:14:17.353557 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:14:17.353569 | orchestrator |
2026-02-14 05:14:17.353581 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-02-14 05:14:17.353593 | orchestrator | Saturday 14 February 2026 05:14:06 +0000 (0:00:03.483) 0:07:22.297 *****
2026-02-14 05:14:17.353605 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:14:17.353617 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:14:17.353630 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:14:17.353642 | orchestrator |
2026-02-14 05:14:17.353655 | orchestrator | TASK [include_role : octavia] **************************************************
2026-02-14 05:14:17.353668 | orchestrator | Saturday 14 February 2026 05:14:10 +0000 (0:00:04.339) 0:07:26.637 *****
2026-02-14 05:14:17.353680 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 05:14:17.353693 | orchestrator |
2026-02-14 05:14:17.353706 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2026-02-14 05:14:17.353718 | orchestrator | Saturday 14 February 2026 05:14:13 +0000 (0:00:02.415) 0:07:29.052 *****
2026-02-14 05:14:17.353733 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-14 05:14:17.353748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-14 05:14:17.353762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-14 05:14:17.353798 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-14 05:14:19.499668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-14 05:14:19.499792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-14 05:14:19.499819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-14 05:14:19.499926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-14 05:14:19.499949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-14 05:14:19.500014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-14 05:14:19.500049 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-14 05:14:19.500062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-14 05:14:19.500080 | orchestrator | skipping:
[testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-14 05:14:19.500099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-14 05:14:19.500119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-14 
05:14:19.500148 | orchestrator | 2026-02-14 05:14:19.500166 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-02-14 05:14:19.500186 | orchestrator | Saturday 14 February 2026 05:14:18 +0000 (0:00:05.338) 0:07:34.391 ***** 2026-02-14 05:14:19.500220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-14 05:14:20.693612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-14 05:14:20.693717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-14 05:14:20.693734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-14 05:14:20.693749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-14 05:14:20.693885 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-14 05:14:20.693906 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:14:20.693920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-14 05:14:20.693952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-14 05:14:20.693965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-14 05:14:20.693976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-14 05:14:20.693988 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:14:20.694000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-14 05:14:20.694134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-14 05:14:20.694160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': 
'30'}}})  2026-02-14 05:14:37.682625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-14 05:14:37.682772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-14 05:14:37.682801 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:14:37.682823 | orchestrator | 2026-02-14 05:14:37.682875 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-02-14 05:14:37.682898 | orchestrator | Saturday 14 February 2026 05:14:20 +0000 (0:00:02.179) 0:07:36.571 ***** 2026-02-14 05:14:37.682934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-14 05:14:37.682948 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-14 05:14:37.682988 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:14:37.683001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-14 05:14:37.683012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-14 05:14:37.683023 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:14:37.683034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-14 05:14:37.683045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-14 05:14:37.683056 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:14:37.683067 | orchestrator | 2026-02-14 05:14:37.683078 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-02-14 05:14:37.683089 | orchestrator | Saturday 14 February 2026 05:14:22 +0000 (0:00:02.181) 0:07:38.752 ***** 2026-02-14 05:14:37.683100 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:14:37.683113 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:14:37.683125 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:14:37.683137 | orchestrator | 2026-02-14 
05:14:37.683150 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-02-14 05:14:37.683163 | orchestrator | Saturday 14 February 2026 05:14:25 +0000 (0:00:02.261) 0:07:41.014 ***** 2026-02-14 05:14:37.683176 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:14:37.683187 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:14:37.683217 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:14:37.683230 | orchestrator | 2026-02-14 05:14:37.683242 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-02-14 05:14:37.683255 | orchestrator | Saturday 14 February 2026 05:14:28 +0000 (0:00:03.124) 0:07:44.138 ***** 2026-02-14 05:14:37.683267 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 05:14:37.683281 | orchestrator | 2026-02-14 05:14:37.683293 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-02-14 05:14:37.683305 | orchestrator | Saturday 14 February 2026 05:14:30 +0000 (0:00:02.540) 0:07:46.679 ***** 2026-02-14 05:14:37.683338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 
'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:14:37.683355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:14:37.683375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:14:37.683406 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-14 05:14:37.683431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-14 05:14:41.762490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-14 05:14:41.762607 | orchestrator | 2026-02-14 05:14:41.762622 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-02-14 05:14:41.762632 | orchestrator | Saturday 14 February 2026 05:14:37 +0000 (0:00:06.874) 0:07:53.554 ***** 2026-02-14 05:14:41.762642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-14 05:14:41.762665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-14 05:14:41.762674 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:14:41.762700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-14 05:14:41.762715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option 
httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-14 05:14:41.762724 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:14:41.762733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-14 05:14:41.762745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-14 05:14:41.762753 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:14:41.762762 | orchestrator | 2026-02-14 05:14:41.762770 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-02-14 05:14:41.762783 | orchestrator | Saturday 14 February 2026 05:14:39 +0000 (0:00:02.256) 0:07:55.810 ***** 2026-02-14 05:14:41.762793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:14:41.762808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-14 05:14:50.791745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-14 05:14:50.791851 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:14:50.791922 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:14:50.791942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-14 05:14:50.791957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-14 05:14:50.791966 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:14:50.791975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-14 05:14:50.791984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-14 05:14:50.791993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-14 05:14:50.792002 | orchestrator | skipping: [testbed-node-2] 
2026-02-14 05:14:50.792011 | orchestrator | 2026-02-14 05:14:50.792020 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-02-14 05:14:50.792030 | orchestrator | Saturday 14 February 2026 05:14:41 +0000 (0:00:01.831) 0:07:57.641 ***** 2026-02-14 05:14:50.792039 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:14:50.792063 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:14:50.792072 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:14:50.792081 | orchestrator | 2026-02-14 05:14:50.792090 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-02-14 05:14:50.792098 | orchestrator | Saturday 14 February 2026 05:14:43 +0000 (0:00:01.469) 0:07:59.111 ***** 2026-02-14 05:14:50.792107 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:14:50.792116 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:14:50.792124 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:14:50.792150 | orchestrator | 2026-02-14 05:14:50.792159 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-02-14 05:14:50.792168 | orchestrator | Saturday 14 February 2026 05:14:45 +0000 (0:00:02.351) 0:08:01.463 ***** 2026-02-14 05:14:50.792176 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 05:14:50.792186 | orchestrator | 2026-02-14 05:14:50.792194 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-02-14 05:14:50.792203 | orchestrator | Saturday 14 February 2026 05:14:48 +0000 (0:00:02.623) 0:08:04.087 ***** 2026-02-14 05:14:50.792233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-14 05:14:50.792246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-14 05:14:50.792257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 
05:14:50.792267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:14:50.792276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-14 05:14:50.792293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-14 05:14:50.792311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-14 05:14:50.792329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:14:52.786411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-02-14 05:14:52.786530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-14 05:14:52.786581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-14 05:14:52.786618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-14 05:14:52.786631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:14:52.786642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:14:52.786675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-14 05:14:52.786688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:14:52.786706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 
45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-14 05:14:52.786725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:14:52.786737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:14:52.786748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-14 05:14:52.786769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:14:54.964985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-14 05:14:54.965163 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:14:54.965197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:14:54.965220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-14 05:14:54.965242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:14:54.965289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-14 05:14:54.965309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:14:54.965343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:14:54.965356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-14 05:14:54.965367 | orchestrator | 2026-02-14 05:14:54.965380 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-02-14 05:14:54.965393 | orchestrator | Saturday 14 February 2026 05:14:54 +0000 (0:00:05.811) 0:08:09.898 ***** 2026-02-14 05:14:54.965405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 
'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-14 05:14:54.965417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-14 05:14:54.965439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-02-14 05:14:55.135578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:14:55.135701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-14 05:14:55.135723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-02-14 05:14:55.135739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})
2026-02-14 05:14:55.135752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:14:55.135784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:14:55.135827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-14 05:14:55.135840 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:14:55.135859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-02-14 05:14:55.135918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-14 05:14:55.135931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:14:55.135943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:14:55.135954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-14 05:14:55.135984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-02-14 05:14:56.388290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})
2026-02-14 05:14:56.388396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:14:56.388414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-02-14 05:14:56.388427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:14:56.388459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-14 05:14:56.388496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-14 05:14:56.388511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:14:56.388524 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:14:56.388536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:14:56.388548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-14 05:14:56.388561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-02-14 05:14:56.388582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})
2026-02-14 05:14:56.388606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:15:08.686186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:15:08.686293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-14 05:15:08.686312 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:15:08.686327 | orchestrator |
2026-02-14 05:15:08.686346 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2026-02-14 05:15:08.686367 | orchestrator | Saturday 14 February 2026 05:14:56 +0000 (0:00:02.366) 0:08:12.264 *****
2026-02-14 05:15:08.686389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-02-14 05:15:08.686412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-02-14 05:15:08.686463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-02-14 05:15:08.686487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-02-14 05:15:08.686508 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:15:08.686530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-02-14 05:15:08.686551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-02-14 05:15:08.686573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-02-14 05:15:08.686632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-02-14 05:15:08.686653 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:15:08.686674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-02-14 05:15:08.686697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-02-14 05:15:08.686719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-02-14 05:15:08.686740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-02-14 05:15:08.686761 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:15:08.686795 | orchestrator |
2026-02-14 05:15:08.686817 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2026-02-14 05:15:08.686837 | orchestrator | Saturday 14 February 2026 05:14:58 +0000 (0:00:01.837) 0:08:14.101 *****
2026-02-14 05:15:08.686856 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:15:08.686876 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:15:08.686933 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:15:08.686954 | orchestrator |
2026-02-14 05:15:08.686974 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2026-02-14 05:15:08.686994 | orchestrator | Saturday 14 February 2026 05:15:00 +0000 (0:00:02.030) 0:08:16.132 *****
2026-02-14 05:15:08.687014 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:15:08.687034 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:15:08.687054 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:15:08.687073 | orchestrator |
2026-02-14 05:15:08.687093 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2026-02-14 05:15:08.687114 | orchestrator | Saturday 14 February 2026 05:15:02 +0000 (0:00:02.245) 0:08:18.378 *****
2026-02-14 05:15:08.687133 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 05:15:08.687153 | orchestrator |
2026-02-14 05:15:08.687173 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2026-02-14 05:15:08.687193 | orchestrator | Saturday 14 February 2026 05:15:04 +0000 (0:00:02.355) 0:08:20.733 *****
2026-02-14 05:15:08.687216 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-14 05:15:08.687268 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-14 05:15:26.486656 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-14 05:15:26.486817 | orchestrator |
2026-02-14 05:15:26.486842 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2026-02-14 05:15:26.486860 | orchestrator | Saturday 14 February 2026 05:15:08 +0000 (0:00:03.822) 0:08:24.555 *****
2026-02-14 05:15:26.486881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-14 05:15:26.486901 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:15:26.486976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-14 05:15:26.486995 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:15:26.487055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-14 05:15:26.487076 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:15:26.487093 | orchestrator |
2026-02-14 05:15:26.487109 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2026-02-14 05:15:26.487139 | orchestrator | Saturday 14 February 2026 05:15:10 +0000 (0:00:01.422) 0:08:25.978 *****
2026-02-14 05:15:26.487159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-02-14 05:15:26.487177 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:15:26.487196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-02-14 05:15:26.487213 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:15:26.487231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-02-14 05:15:26.487249 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:15:26.487270 | orchestrator |
2026-02-14 05:15:26.487290 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2026-02-14 05:15:26.487308 | orchestrator | Saturday 14 February 2026 05:15:11 +0000 (0:00:01.425) 0:08:27.404 *****
2026-02-14 05:15:26.487325 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:15:26.487342 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:15:26.487361 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:15:26.487378 | orchestrator |
2026-02-14 05:15:26.487396 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2026-02-14 05:15:26.487413 | orchestrator | Saturday 14 February 2026 05:15:13 +0000 (0:00:01.822) 0:08:29.226 *****
2026-02-14 05:15:26.487432 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:15:26.487451 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:15:26.487467 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:15:26.487485 | orchestrator |
2026-02-14 05:15:26.487502 | orchestrator | TASK [include_role : skyline] **************************************************
2026-02-14 05:15:26.487519 | orchestrator | Saturday 14 February 2026 05:15:15 +0000 (0:00:02.267) 0:08:31.493 *****
2026-02-14 05:15:26.487538 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 05:15:26.487556 | orchestrator |
2026-02-14 05:15:26.487574 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2026-02-14 05:15:26.487590 | orchestrator | Saturday 14 February 2026 05:15:17 +0000 (0:00:02.392) 0:08:33.886 *****
2026-02-14 05:15:26.487609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-02-14 05:15:26.487636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-02-14 05:15:26.487676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-02-14 05:15:28.177283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-02-14 05:15:28.177419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-02-14 05:15:28.177465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-14 05:15:28.177513 | orchestrator | 2026-02-14 05:15:28.177534 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-02-14 05:15:28.177553 | orchestrator | Saturday 14 February 2026 05:15:26 +0000 (0:00:08.470) 0:08:42.357 ***** 2026-02-14 05:15:28.177599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-14 05:15:28.177619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-14 05:15:28.177638 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:15:28.177656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-14 05:15:28.177687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-14 05:15:28.177705 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:15:28.177734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-14 05:15:50.026391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-14 05:15:50.026499 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:15:50.026516 | orchestrator | 2026-02-14 05:15:50.026528 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-02-14 05:15:50.026539 | orchestrator | Saturday 14 February 2026 05:15:28 +0000 (0:00:01.695) 
0:08:44.053 ***** 2026-02-14 05:15:50.026550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-14 05:15:50.026589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-14 05:15:50.026603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-14 05:15:50.026628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-14 05:15:50.026645 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:15:50.026663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-14 05:15:50.026680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-14 05:15:50.026696 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-14 05:15:50.026712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-14 05:15:50.026722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-14 05:15:50.026732 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:15:50.026742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-14 05:15:50.026770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-14 05:15:50.026782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-14 05:15:50.026791 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:15:50.026801 | orchestrator | 
2026-02-14 05:15:50.026811 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-02-14 05:15:50.026821 | orchestrator | Saturday 14 February 2026 05:15:30 +0000 (0:00:02.068) 0:08:46.121 ***** 2026-02-14 05:15:50.026830 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:15:50.026841 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:15:50.026850 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:15:50.026860 | orchestrator | 2026-02-14 05:15:50.026870 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-02-14 05:15:50.026890 | orchestrator | Saturday 14 February 2026 05:15:32 +0000 (0:00:02.260) 0:08:48.382 ***** 2026-02-14 05:15:50.026902 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:15:50.026914 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:15:50.026925 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:15:50.026975 | orchestrator | 2026-02-14 05:15:50.026993 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-02-14 05:15:50.027012 | orchestrator | Saturday 14 February 2026 05:15:35 +0000 (0:00:02.983) 0:08:51.366 ***** 2026-02-14 05:15:50.027028 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:15:50.027043 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:15:50.027059 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:15:50.027077 | orchestrator | 2026-02-14 05:15:50.027094 | orchestrator | TASK [include_role : trove] **************************************************** 2026-02-14 05:15:50.027107 | orchestrator | Saturday 14 February 2026 05:15:36 +0000 (0:00:01.478) 0:08:52.844 ***** 2026-02-14 05:15:50.027118 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:15:50.027129 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:15:50.027140 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:15:50.027150 | orchestrator | 2026-02-14 05:15:50.027162 | 
orchestrator | TASK [include_role : venus] **************************************************** 2026-02-14 05:15:50.027172 | orchestrator | Saturday 14 February 2026 05:15:38 +0000 (0:00:01.561) 0:08:54.405 ***** 2026-02-14 05:15:50.027183 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:15:50.027194 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:15:50.027205 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:15:50.027216 | orchestrator | 2026-02-14 05:15:50.027226 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-02-14 05:15:50.027241 | orchestrator | Saturday 14 February 2026 05:15:40 +0000 (0:00:01.834) 0:08:56.240 ***** 2026-02-14 05:15:50.027251 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:15:50.027261 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:15:50.027270 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:15:50.027280 | orchestrator | 2026-02-14 05:15:50.027289 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-02-14 05:15:50.027300 | orchestrator | Saturday 14 February 2026 05:15:41 +0000 (0:00:01.386) 0:08:57.626 ***** 2026-02-14 05:15:50.027316 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:15:50.027339 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:15:50.027358 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:15:50.027374 | orchestrator | 2026-02-14 05:15:50.027389 | orchestrator | TASK [include_role : loadbalancer] ********************************************* 2026-02-14 05:15:50.027404 | orchestrator | Saturday 14 February 2026 05:15:43 +0000 (0:00:01.487) 0:08:59.114 ***** 2026-02-14 05:15:50.027421 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 05:15:50.027435 | orchestrator | 2026-02-14 05:15:50.027450 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 
2026-02-14 05:15:50.027465 | orchestrator | Saturday 14 February 2026 05:15:46 +0000 (0:00:02.827) 0:09:01.941 ***** 2026-02-14 05:15:50.027483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-14 05:15:50.027516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-14 05:15:54.505863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-14 05:15:54.506100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-14 05:15:54.506137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-14 05:15:54.506151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-14 05:15:54.506163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-14 05:15:54.506175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-14 05:15:54.506229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 
2026-02-14 05:15:54.506243 | orchestrator | 2026-02-14 05:15:54.506256 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-02-14 05:15:54.506268 | orchestrator | Saturday 14 February 2026 05:15:50 +0000 (0:00:03.955) 0:09:05.897 ***** 2026-02-14 05:15:54.506280 | orchestrator | changed: [testbed-node-0] => { 2026-02-14 05:15:54.506291 | orchestrator |  "msg": "Notifying handlers" 2026-02-14 05:15:54.506302 | orchestrator | } 2026-02-14 05:15:54.506313 | orchestrator | changed: [testbed-node-1] => { 2026-02-14 05:15:54.506324 | orchestrator |  "msg": "Notifying handlers" 2026-02-14 05:15:54.506335 | orchestrator | } 2026-02-14 05:15:54.506345 | orchestrator | changed: [testbed-node-2] => { 2026-02-14 05:15:54.506356 | orchestrator |  "msg": "Notifying handlers" 2026-02-14 05:15:54.506367 | orchestrator | } 2026-02-14 05:15:54.506377 | orchestrator | 2026-02-14 05:15:54.506389 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-14 05:15:54.506408 | orchestrator | Saturday 14 February 2026 05:15:51 +0000 (0:00:01.422) 0:09:07.319 ***** 2026-02-14 05:15:54.506428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-14 05:15:54.506455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 05:15:54.506475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 05:15:54.506495 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:15:54.506516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-14 05:15:54.506550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 05:15:54.506580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 05:17:54.222268 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:17:54.222392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-14 05:17:54.222411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-14 05:17:54.222438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-14 05:17:54.222450 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:17:54.222460 | orchestrator | 2026-02-14 05:17:54.222471 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-02-14 05:17:54.222483 | orchestrator | Saturday 14 February 2026 05:15:54 +0000 (0:00:03.055) 0:09:10.375 ***** 2026-02-14 05:17:54.222492 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:17:54.222526 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:17:54.222536 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:17:54.222546 | orchestrator | 2026-02-14 05:17:54.222556 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-02-14 05:17:54.222565 | orchestrator | Saturday 14 February 2026 05:15:56 +0000 (0:00:01.809) 
0:09:12.184 ***** 2026-02-14 05:17:54.222575 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:17:54.222584 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:17:54.222594 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:17:54.222603 | orchestrator | 2026-02-14 05:17:54.222613 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-02-14 05:17:54.222623 | orchestrator | Saturday 14 February 2026 05:15:57 +0000 (0:00:01.464) 0:09:13.649 ***** 2026-02-14 05:17:54.222632 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:17:54.222642 | orchestrator | changed: [testbed-node-1] 2026-02-14 05:17:54.222652 | orchestrator | changed: [testbed-node-2] 2026-02-14 05:17:54.222661 | orchestrator | 2026-02-14 05:17:54.222671 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-02-14 05:17:54.222680 | orchestrator | Saturday 14 February 2026 05:16:04 +0000 (0:00:07.158) 0:09:20.807 ***** 2026-02-14 05:17:54.222690 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:17:54.222699 | orchestrator | changed: [testbed-node-1] 2026-02-14 05:17:54.222709 | orchestrator | changed: [testbed-node-2] 2026-02-14 05:17:54.222718 | orchestrator | 2026-02-14 05:17:54.222728 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-02-14 05:17:54.222737 | orchestrator | Saturday 14 February 2026 05:16:12 +0000 (0:00:07.511) 0:09:28.318 ***** 2026-02-14 05:17:54.222747 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:17:54.222756 | orchestrator | changed: [testbed-node-1] 2026-02-14 05:17:54.222765 | orchestrator | changed: [testbed-node-2] 2026-02-14 05:17:54.222775 | orchestrator | 2026-02-14 05:17:54.222784 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-02-14 05:17:54.222796 | orchestrator | Saturday 14 February 2026 05:16:19 +0000 (0:00:07.148) 0:09:35.467 ***** 
2026-02-14 05:17:54.222807 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:17:54.222819 | orchestrator | changed: [testbed-node-2] 2026-02-14 05:17:54.222829 | orchestrator | changed: [testbed-node-1] 2026-02-14 05:17:54.222840 | orchestrator | 2026-02-14 05:17:54.222852 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-02-14 05:17:54.222863 | orchestrator | Saturday 14 February 2026 05:16:27 +0000 (0:00:07.815) 0:09:43.282 ***** 2026-02-14 05:17:54.222873 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:17:54.222884 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:17:54.222895 | orchestrator | 2026-02-14 05:17:54.222906 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-02-14 05:17:54.222917 | orchestrator | Saturday 14 February 2026 05:16:31 +0000 (0:00:03.796) 0:09:47.078 ***** 2026-02-14 05:17:54.222929 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:17:54.222941 | orchestrator | changed: [testbed-node-1] 2026-02-14 05:17:54.222953 | orchestrator | changed: [testbed-node-2] 2026-02-14 05:17:54.222964 | orchestrator | 2026-02-14 05:17:54.222992 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-02-14 05:17:54.223003 | orchestrator | Saturday 14 February 2026 05:16:44 +0000 (0:00:13.458) 0:10:00.537 ***** 2026-02-14 05:17:54.223015 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:17:54.223026 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:17:54.223037 | orchestrator | 2026-02-14 05:17:54.223048 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-02-14 05:17:54.223078 | orchestrator | Saturday 14 February 2026 05:16:48 +0000 (0:00:03.730) 0:10:04.268 ***** 2026-02-14 05:17:54.223089 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:17:54.223100 | orchestrator | changed: [testbed-node-2] 2026-02-14 05:17:54.223111 | 
orchestrator | changed: [testbed-node-1] 2026-02-14 05:17:54.223122 | orchestrator | 2026-02-14 05:17:54.223133 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-02-14 05:17:54.223152 | orchestrator | Saturday 14 February 2026 05:16:55 +0000 (0:00:07.089) 0:10:11.357 ***** 2026-02-14 05:17:54.223162 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:17:54.223171 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:17:54.223180 | orchestrator | changed: [testbed-node-0] 2026-02-14 05:17:54.223190 | orchestrator | 2026-02-14 05:17:54.223199 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-02-14 05:17:54.223209 | orchestrator | Saturday 14 February 2026 05:17:02 +0000 (0:00:06.849) 0:10:18.207 ***** 2026-02-14 05:17:54.223218 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:17:54.223228 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:17:54.223237 | orchestrator | changed: [testbed-node-0] 2026-02-14 05:17:54.223246 | orchestrator | 2026-02-14 05:17:54.223256 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-02-14 05:17:54.223266 | orchestrator | Saturday 14 February 2026 05:17:09 +0000 (0:00:06.890) 0:10:25.098 ***** 2026-02-14 05:17:54.223275 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:17:54.223284 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:17:54.223294 | orchestrator | changed: [testbed-node-0] 2026-02-14 05:17:54.223303 | orchestrator | 2026-02-14 05:17:54.223313 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-02-14 05:17:54.223322 | orchestrator | Saturday 14 February 2026 05:17:16 +0000 (0:00:06.876) 0:10:31.974 ***** 2026-02-14 05:17:54.223332 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:17:54.223341 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:17:54.223350 | 
orchestrator | changed: [testbed-node-0] 2026-02-14 05:17:54.223360 | orchestrator | 2026-02-14 05:17:54.223374 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master haproxy to start] ************** 2026-02-14 05:17:54.223384 | orchestrator | Saturday 14 February 2026 05:17:23 +0000 (0:00:07.377) 0:10:39.352 ***** 2026-02-14 05:17:54.223393 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:17:54.223403 | orchestrator | 2026-02-14 05:17:54.223412 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-02-14 05:17:54.223422 | orchestrator | Saturday 14 February 2026 05:17:27 +0000 (0:00:03.602) 0:10:42.955 ***** 2026-02-14 05:17:54.223431 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:17:54.223441 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:17:54.223450 | orchestrator | changed: [testbed-node-0] 2026-02-14 05:17:54.223459 | orchestrator | 2026-02-14 05:17:54.223469 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master proxysql to start] ************* 2026-02-14 05:17:54.223478 | orchestrator | Saturday 14 February 2026 05:17:38 +0000 (0:00:11.820) 0:10:54.775 ***** 2026-02-14 05:17:54.223488 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:17:54.223497 | orchestrator | 2026-02-14 05:17:54.223507 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-02-14 05:17:54.223516 | orchestrator | Saturday 14 February 2026 05:17:42 +0000 (0:00:03.563) 0:10:58.338 ***** 2026-02-14 05:17:54.223526 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:17:54.223535 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:17:54.223545 | orchestrator | changed: [testbed-node-0] 2026-02-14 05:17:54.223554 | orchestrator | 2026-02-14 05:17:54.223564 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-02-14 05:17:54.223573 | orchestrator | Saturday 14 February 2026 05:17:49 +0000 
(0:00:06.904) 0:11:05.242 ***** 2026-02-14 05:17:54.223583 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:17:54.223592 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:17:54.223601 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:17:54.223611 | orchestrator | 2026-02-14 05:17:54.223620 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-02-14 05:17:54.223630 | orchestrator | Saturday 14 February 2026 05:17:51 +0000 (0:00:02.059) 0:11:07.302 ***** 2026-02-14 05:17:54.223639 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:17:54.223649 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:17:54.223658 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:17:54.223673 | orchestrator | 2026-02-14 05:17:54.223683 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 05:17:54.223693 | orchestrator | testbed-node-0 : ok=129  changed=29  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-14 05:17:54.223703 | orchestrator | testbed-node-1 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-14 05:17:54.223713 | orchestrator | testbed-node-2 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-14 05:17:54.223722 | orchestrator | 2026-02-14 05:17:54.223732 | orchestrator | 2026-02-14 05:17:54.223741 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 05:17:54.223751 | orchestrator | Saturday 14 February 2026 05:17:54 +0000 (0:00:02.785) 0:11:10.088 ***** 2026-02-14 05:17:54.223760 | orchestrator | =============================================================================== 2026-02-14 05:17:54.223770 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.46s 2026-02-14 05:17:54.223779 | orchestrator | loadbalancer : Start master proxysql container ------------------------- 11.82s 2026-02-14 
05:17:54.223789 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 8.47s 2026-02-14 05:17:54.223804 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 7.82s 2026-02-14 05:17:55.258359 | orchestrator | loadbalancer : Stop backup haproxy container ---------------------------- 7.51s 2026-02-14 05:17:55.258475 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 7.45s 2026-02-14 05:17:55.258498 | orchestrator | loadbalancer : Start master haproxy container --------------------------- 7.38s 2026-02-14 05:17:55.258514 | orchestrator | loadbalancer : Stop backup keepalived container ------------------------- 7.16s 2026-02-14 05:17:55.258525 | orchestrator | loadbalancer : Stop backup proxysql container --------------------------- 7.15s 2026-02-14 05:17:55.258536 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 7.09s 2026-02-14 05:17:55.258547 | orchestrator | loadbalancer : Start master keepalived container ------------------------ 6.90s 2026-02-14 05:17:55.258557 | orchestrator | loadbalancer : Stop master proxysql container --------------------------- 6.89s 2026-02-14 05:17:55.258568 | orchestrator | loadbalancer : Stop master keepalived container ------------------------- 6.88s 2026-02-14 05:17:55.258579 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.87s 2026-02-14 05:17:55.258589 | orchestrator | loadbalancer : Stop master haproxy container ---------------------------- 6.85s 2026-02-14 05:17:55.258600 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.98s 2026-02-14 05:17:55.258610 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.82s 2026-02-14 05:17:55.258621 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.81s 2026-02-14 05:17:55.258632 
| orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.69s 2026-02-14 05:17:55.258643 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 5.34s 2026-02-14 05:17:55.601272 | orchestrator | + osism apply -a upgrade opensearch 2026-02-14 05:17:57.697217 | orchestrator | 2026-02-14 05:17:57 | INFO  | Task eaceb5a1-0e92-44b2-baf2-abcbd5c8f09a (opensearch) was prepared for execution. 2026-02-14 05:17:57.697349 | orchestrator | 2026-02-14 05:17:57 | INFO  | It takes a moment until task eaceb5a1-0e92-44b2-baf2-abcbd5c8f09a (opensearch) has been started and output is visible here. 2026-02-14 05:18:09.447967 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-14 05:18:09.448154 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-14 05:18:09.448208 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-14 05:18:09.448258 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-14 05:18:09.448298 | orchestrator | 2026-02-14 05:18:09.448319 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-14 05:18:09.448336 | orchestrator | 2026-02-14 05:18:09.448356 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-14 05:18:09.448375 | orchestrator | Saturday 14 February 2026 05:18:03 +0000 (0:00:01.086) 0:00:01.086 ***** 2026-02-14 05:18:09.448395 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:18:09.448415 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:18:09.448434 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:18:09.448453 | orchestrator | 2026-02-14 05:18:09.448472 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-14 05:18:09.448491 | orchestrator | Saturday 14 February 2026 05:18:04 +0000 (0:00:01.194) 
0:00:02.281 ***** 2026-02-14 05:18:09.448509 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-02-14 05:18:09.448528 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-02-14 05:18:09.448546 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-02-14 05:18:09.448565 | orchestrator | 2026-02-14 05:18:09.448585 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-02-14 05:18:09.448604 | orchestrator | 2026-02-14 05:18:09.448624 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-14 05:18:09.448644 | orchestrator | Saturday 14 February 2026 05:18:05 +0000 (0:00:00.956) 0:00:03.237 ***** 2026-02-14 05:18:09.448663 | orchestrator | included: /ansible/roles/opensearch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 05:18:09.448682 | orchestrator | 2026-02-14 05:18:09.448700 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-02-14 05:18:09.448720 | orchestrator | Saturday 14 February 2026 05:18:06 +0000 (0:00:01.196) 0:00:04.434 ***** 2026-02-14 05:18:09.448738 | orchestrator | ok: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-14 05:18:09.448756 | orchestrator | ok: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-14 05:18:09.448776 | orchestrator | ok: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-14 05:18:09.448794 | orchestrator | 2026-02-14 05:18:09.448814 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-02-14 05:18:09.448833 | orchestrator | Saturday 14 February 2026 05:18:07 +0000 (0:00:01.315) 0:00:05.749 ***** 2026-02-14 05:18:09.448856 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 
'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:18:09.448881 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:18:09.448969 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': 
'-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:18:09.448992 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-14 05:18:09.449014 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-14 05:18:09.449055 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': 
['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-14 05:18:13.894118 | orchestrator | 2026-02-14 05:18:13.894239 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-14 05:18:13.894256 | orchestrator | Saturday 14 February 2026 05:18:09 +0000 (0:00:01.481) 0:00:07.231 ***** 2026-02-14 05:18:13.894269 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 05:18:13.894281 | orchestrator | 2026-02-14 05:18:13.894292 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-02-14 05:18:13.894303 | orchestrator | Saturday 14 February 2026 05:18:10 +0000 (0:00:00.914) 0:00:08.146 ***** 2026-02-14 05:18:13.894317 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:18:13.894331 | 
orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:18:13.894343 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:18:13.894418 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-14 05:18:13.894435 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-14 05:18:13.894448 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-14 05:18:13.894460 | orchestrator | 2026-02-14 05:18:13.894472 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-02-14 05:18:13.894492 | orchestrator | Saturday 14 February 2026 05:18:13 +0000 (0:00:02.689) 0:00:10.835 ***** 2026-02-14 05:18:13.894503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-14 05:18:13.894530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-14 05:18:14.992533 | 
orchestrator | skipping: [testbed-node-0] 2026-02-14 05:18:14.992675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-14 05:18:14.992707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-14 05:18:14.992780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-14 05:18:14.992807 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:18:14.992858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-14 05:18:14.992884 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:18:14.992903 | orchestrator |
2026-02-14 05:18:14.992923 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] ***
2026-02-14 05:18:14.992944 | orchestrator | Saturday 14 February 2026 05:18:13 +0000 (0:00:00.855) 0:00:11.691 *****
2026-02-14 05:18:14.992965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-14 05:18:14.992986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-14 05:18:14.993020 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:18:14.993050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 
 2026-02-14 05:18:14.993113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-14 05:18:17.872692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-14 05:18:17.872860 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:18:17.872893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-14 05:18:17.872914 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:18:17.872933 | orchestrator |
2026-02-14 05:18:17.872953 | orchestrator | TASK [opensearch : Copying over config.json files for services] ****************
2026-02-14 05:18:17.872975 | orchestrator | Saturday 14 February 2026 05:18:14 +0000 (0:00:01.093) 0:00:12.784 *****
2026-02-14 05:18:17.873014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 
'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:18:17.873065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:18:17.873138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:18:17.873174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-14 05:18:17.873205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-14 05:18:17.873243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-14 05:18:26.841454 | orchestrator |
2026-02-14 05:18:26.841544 | orchestrator | TASK [opensearch : Copying over opensearch service config file] ****************
2026-02-14 05:18:26.841569 | orchestrator | Saturday 14 February 2026 05:18:17 +0000 (0:00:02.881) 0:00:15.666 *****
2026-02-14 05:18:26.841576 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:18:26.841583 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:18:26.841589 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:18:26.841594 | orchestrator |
2026-02-14 05:18:26.841600 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] *************
2026-02-14 05:18:26.841606 | orchestrator | Saturday 14 February 2026 05:18:20 +0000 (0:00:02.500) 0:00:18.167 *****
2026-02-14 05:18:26.841611 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:18:26.841616 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:18:26.841622 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:18:26.841627 | orchestrator |
2026-02-14 05:18:26.841632 | orchestrator | TASK [service-check-containers : opensearch | Check containers] ****************
2026-02-14 05:18:26.841638 | orchestrator | Saturday 14 February 2026 05:18:22 +0000 (0:00:01.966) 0:00:20.134 *****
2026-02-14 05:18:26.841645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:18:26.841663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:18:26.841669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-14 05:18:26.841689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-14 05:18:26.841702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-14 05:18:26.841711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-14 05:18:26.841718 | orchestrator |
2026-02-14 05:18:26.841724 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] ***
2026-02-14 05:18:26.841730 | orchestrator | Saturday 14 February 2026 05:18:25 +0000 (0:00:02.782) 0:00:22.916 *****
2026-02-14 05:18:26.841736 | orchestrator | changed: [testbed-node-0] => {
2026-02-14 05:18:26.841742 | orchestrator |  "msg": "Notifying handlers"
2026-02-14 05:18:26.841748 | orchestrator | }
2026-02-14 05:18:26.841754 | orchestrator | changed: [testbed-node-1] => {
2026-02-14 05:18:26.841759 | orchestrator |  "msg": "Notifying handlers"
2026-02-14 05:18:26.841764 | orchestrator | }
2026-02-14 05:18:26.841770 | orchestrator | changed: [testbed-node-2] => {
2026-02-14 05:18:26.841780 | orchestrator |  "msg": "Notifying handlers"
2026-02-14 05:18:26.841786 | orchestrator | }
2026-02-14 05:18:26.841791 | orchestrator |
2026-02-14 05:18:26.841797 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-14 05:18:26.841802 | orchestrator | Saturday 14 February 2026 05:18:25 +0000 (0:00:00.402) 0:00:23.319 *****
2026-02-14 05:18:26.841813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-14 05:21:29.570571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-14 05:21:29.570693 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:21:29.570728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-14 05:21:29.570744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-14 05:21:29.570780 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:21:29.570811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-14 05:21:29.570824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-14 
05:21:29.570836 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:21:29.570847 | orchestrator |
2026-02-14 05:21:29.570859 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-02-14 05:21:29.570872 | orchestrator | Saturday 14 February 2026 05:18:26 +0000 (0:00:01.319) 0:00:24.638 *****
2026-02-14 05:21:29.570883 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:21:29.570893 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback
2026-02-14 05:21:29.570905 | orchestrator | plugin (): 'NoneType' object is not subscriptable
2026-02-14 05:21:29.570931 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:21:29.570942 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:21:29.570953 | orchestrator |
2026-02-14 05:21:29.570963 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-02-14 05:21:29.570974 | orchestrator | Saturday 14 February 2026 05:18:27 +0000 (0:00:00.075) 0:00:25.227 *****
2026-02-14 05:21:29.570985 | orchestrator |
2026-02-14 05:21:29.570996 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-02-14 05:21:29.571017 | orchestrator | Saturday 14 February 2026 05:18:27 +0000 (0:00:00.078) 0:00:25.303 *****
2026-02-14 05:21:29.571028 | orchestrator |
2026-02-14 05:21:29.571039 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-02-14 05:21:29.571051 | orchestrator | Saturday 14 February 2026 05:18:27 +0000 (0:00:00.075) 0:00:25.381 *****
2026-02-14 05:21:29.571062 | orchestrator |
2026-02-14 05:21:29.571073 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2026-02-14 05:21:29.571084 | orchestrator | Saturday 14 February 2026 05:18:27 +0000 (0:00:00.075) 0:00:25.457 *****
2026-02-14 05:21:29.571095 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:21:29.571108 | orchestrator |
2026-02-14 05:21:29.571121 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2026-02-14 05:21:29.571133 | orchestrator | Saturday 14 February 2026 05:18:30 +0000 (0:00:02.511) 0:00:27.968 *****
2026-02-14 05:21:29.571146 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:21:29.571158 | orchestrator |
2026-02-14 05:21:29.571170 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2026-02-14 05:21:29.571182 | orchestrator | Saturday 14 February 2026 05:18:37 +0000 (0:00:07.240) 0:00:35.209 *****
2026-02-14 05:21:29.571194 | orchestrator | changed: [testbed-node-2]
2026-02-14 05:21:29.571232 | orchestrator | changed: [testbed-node-0]
2026-02-14 05:21:29.571244 | orchestrator | changed: [testbed-node-1]
2026-02-14 05:21:29.571257 | orchestrator |
2026-02-14 05:21:29.571269 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2026-02-14 05:21:29.571282 | orchestrator | Saturday 14 February 2026 05:19:48 +0000 (0:01:11.278) 0:01:46.488 *****
2026-02-14 05:21:29.571294 | orchestrator | changed: [testbed-node-0]
2026-02-14 05:21:29.571307 | orchestrator | changed: [testbed-node-2]
2026-02-14 05:21:29.571320 | orchestrator | changed: [testbed-node-1]
2026-02-14 05:21:29.571332 | orchestrator |
2026-02-14 05:21:29.571344 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-02-14 05:21:29.571357 | orchestrator | Saturday 14 February 2026 05:21:23 +0000 (0:01:35.230) 0:03:21.718 *****
2026-02-14 05:21:29.571369 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 05:21:29.571382 | orchestrator |
2026-02-14 05:21:29.571395 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2026-02-14 05:21:29.571407 | orchestrator | Saturday 14 February 2026 05:21:24 +0000 (0:00:01.031) 0:03:22.750 *****
2026-02-14 05:21:29.571420 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:21:29.571433 | orchestrator |
2026-02-14 05:21:29.571444 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2026-02-14 05:21:29.571455 | orchestrator | Saturday 14 February 2026 05:21:27 +0000 (0:00:02.264) 0:03:25.015 *****
2026-02-14 05:21:29.571465 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:21:29.571476 | orchestrator |
2026-02-14 05:21:29.571494 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2026-02-14 05:21:31.771357 | orchestrator | Saturday 14 February 2026 05:21:29 +0000 (0:00:02.341) 0:03:27.357 *****
2026-02-14 05:21:31.771462 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:21:31.771479 | orchestrator |
2026-02-14 05:21:31.771492 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2026-02-14 05:21:31.771503 | orchestrator | Saturday 14 February 2026 05:21:29 +0000 (0:00:00.216) 0:03:27.573 *****
2026-02-14 05:21:31.771514 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:21:31.771525 | orchestrator |
2026-02-14 05:21:31.771536 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 05:21:31.771548 | orchestrator | testbed-node-0 : ok=19  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-14 05:21:31.771560 | orchestrator | testbed-node-1 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-14 05:21:31.771571 | orchestrator | testbed-node-2 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-14 05:21:31.771608 | orchestrator |
2026-02-14 05:21:31.771620 | orchestrator |
2026-02-14 05:21:31.771630 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 05:21:31.771657 
| orchestrator | Saturday 14 February 2026 05:21:31 +0000 (0:00:01.621) 0:03:29.195 ***** 2026-02-14 05:21:31.771679 | orchestrator | =============================================================================== 2026-02-14 05:21:31.771690 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 95.23s 2026-02-14 05:21:31.771702 | orchestrator | opensearch : Restart opensearch container ------------------------------ 71.28s 2026-02-14 05:21:31.771722 | orchestrator | opensearch : Perform a flush -------------------------------------------- 7.24s 2026-02-14 05:21:31.771750 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.88s 2026-02-14 05:21:31.771771 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 2.78s 2026-02-14 05:21:31.771790 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.69s 2026-02-14 05:21:31.771809 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 2.51s 2026-02-14 05:21:31.771828 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.50s 2026-02-14 05:21:31.771866 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.34s 2026-02-14 05:21:31.771886 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.26s 2026-02-14 05:21:31.771905 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.97s 2026-02-14 05:21:31.771926 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 1.62s 2026-02-14 05:21:31.771943 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.48s 2026-02-14 05:21:31.771963 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.32s 2026-02-14 05:21:31.771984 | orchestrator | 
opensearch : Setting sysctl values -------------------------------------- 1.32s 2026-02-14 05:21:31.772004 | orchestrator | opensearch : include_tasks ---------------------------------------------- 1.20s 2026-02-14 05:21:31.772023 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.19s 2026-02-14 05:21:31.772043 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.09s 2026-02-14 05:21:31.772063 | orchestrator | opensearch : include_tasks ---------------------------------------------- 1.03s 2026-02-14 05:21:31.772085 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.96s 2026-02-14 05:21:32.080298 | orchestrator | + osism apply -a upgrade memcached 2026-02-14 05:21:34.173303 | orchestrator | 2026-02-14 05:21:34 | INFO  | Task ea97de7c-0853-4283-bbbb-bf12fa7e6d1b (memcached) was prepared for execution. 2026-02-14 05:21:34.173389 | orchestrator | 2026-02-14 05:21:34 | INFO  | It takes a moment until task ea97de7c-0853-4283-bbbb-bf12fa7e6d1b (memcached) has been started and output is visible here. 
2026-02-14 05:22:08.362483 | orchestrator | 2026-02-14 05:22:08.362619 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-14 05:22:08.362648 | orchestrator | 2026-02-14 05:22:08.362661 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-14 05:22:08.362673 | orchestrator | Saturday 14 February 2026 05:21:39 +0000 (0:00:01.388) 0:00:01.388 ***** 2026-02-14 05:22:08.362684 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:22:08.362696 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:22:08.362707 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:22:08.362718 | orchestrator | 2026-02-14 05:22:08.362729 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-14 05:22:08.362741 | orchestrator | Saturday 14 February 2026 05:21:41 +0000 (0:00:01.942) 0:00:03.331 ***** 2026-02-14 05:22:08.362752 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-02-14 05:22:08.362790 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-02-14 05:22:08.362801 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-02-14 05:22:08.362812 | orchestrator | 2026-02-14 05:22:08.362823 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-02-14 05:22:08.362833 | orchestrator | 2026-02-14 05:22:08.362844 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-02-14 05:22:08.362855 | orchestrator | Saturday 14 February 2026 05:21:44 +0000 (0:00:02.903) 0:00:06.235 ***** 2026-02-14 05:22:08.362866 | orchestrator | included: /ansible/roles/memcached/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 05:22:08.362877 | orchestrator | 2026-02-14 05:22:08.362888 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 
2026-02-14 05:22:08.362899 | orchestrator | Saturday 14 February 2026 05:21:47 +0000 (0:00:02.574) 0:00:08.809 ***** 2026-02-14 05:22:08.362910 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-02-14 05:22:08.362921 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-02-14 05:22:08.362932 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-02-14 05:22:08.362943 | orchestrator | 2026-02-14 05:22:08.362954 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-02-14 05:22:08.362964 | orchestrator | Saturday 14 February 2026 05:21:49 +0000 (0:00:01.994) 0:00:10.804 ***** 2026-02-14 05:22:08.362975 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-02-14 05:22:08.362986 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-02-14 05:22:08.362999 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-02-14 05:22:08.363011 | orchestrator | 2026-02-14 05:22:08.363023 | orchestrator | TASK [service-check-containers : memcached | Check containers] ***************** 2026-02-14 05:22:08.363036 | orchestrator | Saturday 14 February 2026 05:21:51 +0000 (0:00:02.586) 0:00:13.390 ***** 2026-02-14 05:22:08.363051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': 
True}}}}) 2026-02-14 05:22:08.363083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-14 05:22:08.363119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-14 05:22:08.363143 | orchestrator | 2026-02-14 05:22:08.363155 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] *** 2026-02-14 05:22:08.363168 | orchestrator | Saturday 14 February 2026 05:21:54 +0000 (0:00:02.302) 0:00:15.692 ***** 2026-02-14 05:22:08.363182 | orchestrator | changed: [testbed-node-0] => { 2026-02-14 
05:22:08.363194 | orchestrator |  "msg": "Notifying handlers" 2026-02-14 05:22:08.363207 | orchestrator | } 2026-02-14 05:22:08.363220 | orchestrator | changed: [testbed-node-1] => { 2026-02-14 05:22:08.363263 | orchestrator |  "msg": "Notifying handlers" 2026-02-14 05:22:08.363276 | orchestrator | } 2026-02-14 05:22:08.363289 | orchestrator | changed: [testbed-node-2] => { 2026-02-14 05:22:08.363301 | orchestrator |  "msg": "Notifying handlers" 2026-02-14 05:22:08.363313 | orchestrator | } 2026-02-14 05:22:08.363326 | orchestrator | 2026-02-14 05:22:08.363350 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-14 05:22:08.363371 | orchestrator | Saturday 14 February 2026 05:21:55 +0000 (0:00:01.384) 0:00:17.076 ***** 2026-02-14 05:22:08.363383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-14 05:22:08.363395 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:22:08.363407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-14 05:22:08.363418 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:22:08.363435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-14 05:22:08.363447 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:22:08.363458 | orchestrator | 2026-02-14 05:22:08.363469 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-02-14 05:22:08.363480 | orchestrator | Saturday 14 February 2026 05:21:57 +0000 (0:00:02.019) 0:00:19.096 ***** 2026-02-14 05:22:08.363498 | orchestrator | changed: [testbed-node-0] 2026-02-14 05:22:08.363509 | orchestrator | changed: [testbed-node-2] 2026-02-14 05:22:08.363520 | orchestrator | changed: [testbed-node-1] 2026-02-14 05:22:08.363531 | orchestrator | 2026-02-14 05:22:08.363542 | 
orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 05:22:08.363553 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-14 05:22:08.363565 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-14 05:22:08.363576 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-14 05:22:08.363587 | orchestrator | 2026-02-14 05:22:08.363598 | orchestrator | 2026-02-14 05:22:08.363609 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 05:22:08.363628 | orchestrator | Saturday 14 February 2026 05:22:08 +0000 (0:00:10.769) 0:00:29.866 ***** 2026-02-14 05:22:08.711047 | orchestrator | =============================================================================== 2026-02-14 05:22:08.711144 | orchestrator | memcached : Restart memcached container -------------------------------- 10.77s 2026-02-14 05:22:08.711160 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.90s 2026-02-14 05:22:08.711172 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.59s 2026-02-14 05:22:08.711184 | orchestrator | memcached : include_tasks ----------------------------------------------- 2.57s 2026-02-14 05:22:08.711194 | orchestrator | service-check-containers : memcached | Check containers ----------------- 2.30s 2026-02-14 05:22:08.711206 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.02s 2026-02-14 05:22:08.711216 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.99s 2026-02-14 05:22:08.711255 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.94s 2026-02-14 05:22:08.711266 | orchestrator | 
service-check-containers : memcached | Notify handlers to restart containers --- 1.38s 2026-02-14 05:22:09.039038 | orchestrator | + osism apply -a upgrade redis 2026-02-14 05:22:11.143275 | orchestrator | 2026-02-14 05:22:11 | INFO  | Task 8e84612b-3fd4-40e4-93d0-31ee9052c592 (redis) was prepared for execution. 2026-02-14 05:22:11.143366 | orchestrator | 2026-02-14 05:22:11 | INFO  | It takes a moment until task 8e84612b-3fd4-40e4-93d0-31ee9052c592 (redis) has been started and output is visible here. 2026-02-14 05:22:23.213375 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-14 05:22:23.213496 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-14 05:22:23.213523 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-14 05:22:23.213534 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-14 05:22:23.213557 | orchestrator | 2026-02-14 05:22:23.213570 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-14 05:22:23.213581 | orchestrator | 2026-02-14 05:22:23.213592 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-14 05:22:23.213603 | orchestrator | Saturday 14 February 2026 05:22:16 +0000 (0:00:01.029) 0:00:01.029 ***** 2026-02-14 05:22:23.213615 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:22:23.213626 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:22:23.213637 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:22:23.213648 | orchestrator | 2026-02-14 05:22:23.213659 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-14 05:22:23.213691 | orchestrator | Saturday 14 February 2026 05:22:17 +0000 (0:00:01.105) 0:00:02.134 ***** 2026-02-14 05:22:23.213702 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-02-14 05:22:23.213713 | 
orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-02-14 05:22:23.213724 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-02-14 05:22:23.213735 | orchestrator | 2026-02-14 05:22:23.213745 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-02-14 05:22:23.213756 | orchestrator | 2026-02-14 05:22:23.213767 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-02-14 05:22:23.213778 | orchestrator | Saturday 14 February 2026 05:22:18 +0000 (0:00:00.959) 0:00:03.094 ***** 2026-02-14 05:22:23.213789 | orchestrator | included: /ansible/roles/redis/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 05:22:23.213800 | orchestrator | 2026-02-14 05:22:23.213811 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-02-14 05:22:23.213837 | orchestrator | Saturday 14 February 2026 05:22:19 +0000 (0:00:01.111) 0:00:04.206 ***** 2026-02-14 05:22:23.213852 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-14 05:22:23.213872 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-14 05:22:23.213886 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-14 05:22:23.213901 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-14 05:22:23.213935 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-14 05:22:23.213957 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-14 05:22:23.213970 | orchestrator | 2026-02-14 05:22:23.213983 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-02-14 05:22:23.213996 | orchestrator | Saturday 14 February 2026 05:22:21 +0000 (0:00:01.508) 0:00:05.714 ***** 2026-02-14 05:22:23.214014 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-14 05:22:23.214089 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-14 05:22:23.214103 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-14 05:22:23.214116 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': 
'30'}}}) 2026-02-14 05:22:23.214145 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-14 05:22:28.151038 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-14 05:22:28.151147 | orchestrator | 2026-02-14 05:22:28.151164 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-02-14 05:22:28.151176 | orchestrator | Saturday 14 February 2026 05:22:23 +0000 (0:00:02.121) 0:00:07.836 ***** 2026-02-14 05:22:28.151203 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-14 05:22:28.151216 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-14 05:22:28.151226 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-14 05:22:28.151296 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-14 05:22:28.151316 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-14 05:22:28.151388 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-14 05:22:28.151401 | orchestrator | 2026-02-14 05:22:28.151411 | orchestrator 
| TASK [service-check-containers : redis | Check containers] ********************* 2026-02-14 05:22:28.151421 | orchestrator | Saturday 14 February 2026 05:22:26 +0000 (0:00:02.859) 0:00:10.695 ***** 2026-02-14 05:22:28.151432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-14 05:22:28.151472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-14 05:22:28.151485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 
'timeout': '30'}}}) 2026-02-14 05:22:28.151496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-14 05:22:28.151517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-14 05:22:28.151536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-14 05:22:50.395888 | orchestrator | 2026-02-14 05:22:50.396059 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] *** 2026-02-14 05:22:50.396079 | orchestrator | Saturday 14 February 2026 05:22:28 +0000 (0:00:02.080) 0:00:12.775 ***** 2026-02-14 05:22:50.396092 | orchestrator | changed: [testbed-node-0] => { 2026-02-14 05:22:50.396105 | orchestrator |  "msg": "Notifying handlers" 2026-02-14 05:22:50.396116 | orchestrator | } 2026-02-14 05:22:50.396127 | orchestrator | changed: [testbed-node-1] => { 2026-02-14 05:22:50.396138 | orchestrator |  "msg": "Notifying handlers" 2026-02-14 05:22:50.396149 | orchestrator | } 2026-02-14 05:22:50.396160 | orchestrator | changed: [testbed-node-2] => { 2026-02-14 05:22:50.396188 | orchestrator |  "msg": "Notifying handlers" 2026-02-14 05:22:50.396200 | orchestrator | } 2026-02-14 05:22:50.396211 | orchestrator | 2026-02-14 05:22:50.396222 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-14 05:22:50.396233 | orchestrator | Saturday 14 February 2026 05:22:28 +0000 (0:00:00.573) 0:00:13.349 ***** 2026-02-14 05:22:50.396267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
redis-server 6379'], 'timeout': '30'}}})  2026-02-14 05:22:50.396283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-14 05:22:50.396296 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-14 05:22:50.396331 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-02-14 05:22:50.396354 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:22:50.396365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-14 05:22:50.396377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-14 05:22:50.396389 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:22:50.396421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-14 05:22:50.396441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-14 05:22:50.396454 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:22:50.396467 | orchestrator | 2026-02-14 
05:22:50.396479 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-14 05:22:50.396493 | orchestrator | Saturday 14 February 2026 05:22:29 +0000 (0:00:01.067) 0:00:14.416 ***** 2026-02-14 05:22:50.396506 | orchestrator | 2026-02-14 05:22:50.396518 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-14 05:22:50.396530 | orchestrator | Saturday 14 February 2026 05:22:29 +0000 (0:00:00.080) 0:00:14.497 ***** 2026-02-14 05:22:50.396542 | orchestrator | 2026-02-14 05:22:50.396553 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-14 05:22:50.396564 | orchestrator | Saturday 14 February 2026 05:22:29 +0000 (0:00:00.072) 0:00:14.569 ***** 2026-02-14 05:22:50.396575 | orchestrator | 2026-02-14 05:22:50.396586 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-02-14 05:22:50.396605 | orchestrator | Saturday 14 February 2026 05:22:30 +0000 (0:00:00.083) 0:00:14.653 ***** 2026-02-14 05:22:50.396616 | orchestrator | changed: [testbed-node-0] 2026-02-14 05:22:50.396626 | orchestrator | changed: [testbed-node-1] 2026-02-14 05:22:50.396638 | orchestrator | changed: [testbed-node-2] 2026-02-14 05:22:50.396648 | orchestrator | 2026-02-14 05:22:50.396659 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-02-14 05:22:50.396670 | orchestrator | Saturday 14 February 2026 05:22:39 +0000 (0:00:09.584) 0:00:24.238 ***** 2026-02-14 05:22:50.396681 | orchestrator | changed: [testbed-node-0] 2026-02-14 05:22:50.396691 | orchestrator | changed: [testbed-node-1] 2026-02-14 05:22:50.396702 | orchestrator | changed: [testbed-node-2] 2026-02-14 05:22:50.396713 | orchestrator | 2026-02-14 05:22:50.396723 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 05:22:50.396735 | 
orchestrator | testbed-node-0 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-14 05:22:50.396748 | orchestrator | testbed-node-1 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-14 05:22:50.396758 | orchestrator | testbed-node-2 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-14 05:22:50.396769 | orchestrator | 2026-02-14 05:22:50.396780 | orchestrator | 2026-02-14 05:22:50.396791 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 05:22:50.396802 | orchestrator | Saturday 14 February 2026 05:22:49 +0000 (0:00:10.321) 0:00:34.559 ***** 2026-02-14 05:22:50.396812 | orchestrator | =============================================================================== 2026-02-14 05:22:50.396823 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.32s 2026-02-14 05:22:50.396834 | orchestrator | redis : Restart redis container ----------------------------------------- 9.58s 2026-02-14 05:22:50.396845 | orchestrator | redis : Copying over redis config files --------------------------------- 2.86s 2026-02-14 05:22:50.396855 | orchestrator | redis : Copying over default config.json files -------------------------- 2.12s 2026-02-14 05:22:50.396866 | orchestrator | service-check-containers : redis | Check containers --------------------- 2.08s 2026-02-14 05:22:50.396877 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.51s 2026-02-14 05:22:50.396888 | orchestrator | redis : include_tasks --------------------------------------------------- 1.11s 2026-02-14 05:22:50.396898 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.11s 2026-02-14 05:22:50.396909 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.07s 2026-02-14 05:22:50.396920 | orchestrator | Group hosts 
based on enabled services ----------------------------------- 0.96s 2026-02-14 05:22:50.396930 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 0.57s 2026-02-14 05:22:50.396941 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.24s 2026-02-14 05:22:50.733183 | orchestrator | + osism apply -a upgrade mariadb 2026-02-14 05:22:52.825821 | orchestrator | 2026-02-14 05:22:52 | INFO  | Task 454d6cc7-2013-4b4e-bb85-3d179eda7456 (mariadb) was prepared for execution. 2026-02-14 05:22:52.826146 | orchestrator | 2026-02-14 05:22:52 | INFO  | It takes a moment until task 454d6cc7-2013-4b4e-bb85-3d179eda7456 (mariadb) has been started and output is visible here. 2026-02-14 05:23:20.183395 | orchestrator | 2026-02-14 05:23:20.183495 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-14 05:23:20.183507 | orchestrator | 2026-02-14 05:23:20.183515 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-14 05:23:20.183523 | orchestrator | Saturday 14 February 2026 05:22:58 +0000 (0:00:01.672) 0:00:01.672 ***** 2026-02-14 05:23:20.183531 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:23:20.183539 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:23:20.183564 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:23:20.183572 | orchestrator | 2026-02-14 05:23:20.183579 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-14 05:23:20.183587 | orchestrator | Saturday 14 February 2026 05:23:01 +0000 (0:00:02.689) 0:00:04.362 ***** 2026-02-14 05:23:20.183595 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-02-14 05:23:20.183615 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-02-14 05:23:20.183622 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-02-14 05:23:20.183629 | 
orchestrator | 2026-02-14 05:23:20.183637 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-02-14 05:23:20.183644 | orchestrator | 2026-02-14 05:23:20.183651 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-02-14 05:23:20.183658 | orchestrator | Saturday 14 February 2026 05:23:04 +0000 (0:00:03.182) 0:00:07.544 ***** 2026-02-14 05:23:20.183665 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-14 05:23:20.183673 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-14 05:23:20.183680 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-14 05:23:20.183687 | orchestrator | 2026-02-14 05:23:20.183694 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-14 05:23:20.183701 | orchestrator | Saturday 14 February 2026 05:23:06 +0000 (0:00:01.689) 0:00:09.234 ***** 2026-02-14 05:23:20.183709 | orchestrator | included: /ansible/roles/mariadb/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 05:23:20.183727 | orchestrator | 2026-02-14 05:23:20.183735 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-02-14 05:23:20.183742 | orchestrator | Saturday 14 February 2026 05:23:08 +0000 (0:00:01.961) 0:00:11.196 ***** 2026-02-14 05:23:20.183754 | orchestrator | ok: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-14 05:23:20.183788 | orchestrator | ok: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-14 05:23:20.183804 | orchestrator | ok: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': 
{'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-14 05:23:20.183812 | orchestrator | 2026-02-14 05:23:20.183820 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-02-14 05:23:20.183827 | orchestrator | Saturday 14 February 2026 05:23:11 +0000 (0:00:03.641) 0:00:14.837 ***** 2026-02-14 05:23:20.183834 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:23:20.183842 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:23:20.183849 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:23:20.183856 | orchestrator | 2026-02-14 05:23:20.183863 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-02-14 05:23:20.183870 | orchestrator | Saturday 14 February 2026 05:23:13 +0000 (0:00:01.478) 0:00:16.316 ***** 2026-02-14 05:23:20.183882 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:23:20.183890 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:23:20.183897 | 
orchestrator | ok: [testbed-node-0] 2026-02-14 05:23:20.183906 | orchestrator | 2026-02-14 05:23:20.183914 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-02-14 05:23:20.183923 | orchestrator | Saturday 14 February 2026 05:23:15 +0000 (0:00:02.170) 0:00:18.486 ***** 2026-02-14 05:23:20.183942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', 
' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-14 05:23:32.846746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-14 05:23:32.846911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-14 05:23:32.846932 | orchestrator | 
2026-02-14 05:23:32.846946 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-02-14 05:23:32.846959 | orchestrator | Saturday 14 February 2026 05:23:20 +0000 (0:00:04.538) 0:00:23.025 ***** 2026-02-14 05:23:32.846970 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:23:32.846982 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:23:32.846993 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:23:32.847004 | orchestrator | 2026-02-14 05:23:32.847015 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-02-14 05:23:32.847046 | orchestrator | Saturday 14 February 2026 05:23:22 +0000 (0:00:02.131) 0:00:25.157 ***** 2026-02-14 05:23:32.847058 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:23:32.847069 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:23:32.847079 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:23:32.847090 | orchestrator | 2026-02-14 05:23:32.847101 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-14 05:23:32.847111 | orchestrator | Saturday 14 February 2026 05:23:27 +0000 (0:00:04.825) 0:00:29.982 ***** 2026-02-14 05:23:32.847123 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 05:23:32.847134 | orchestrator | 2026-02-14 05:23:32.847145 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-14 05:23:32.847156 | orchestrator | Saturday 14 February 2026 05:23:29 +0000 (0:00:01.958) 0:00:31.941 ***** 2026-02-14 05:23:32.847168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-14 05:23:32.847188 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:23:32.847212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-14 05:23:40.723104 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:23:40.723219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-14 05:23:40.723338 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:23:40.723362 | orchestrator | 2026-02-14 05:23:40.723382 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-14 05:23:40.723401 | orchestrator | Saturday 14 February 2026 05:23:32 +0000 (0:00:03.747) 0:00:35.688 ***** 2026-02-14 05:23:40.723444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-14 05:23:40.723465 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:23:40.723501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-14 05:23:40.723524 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:23:40.723542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-14 05:23:40.723554 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:23:40.723565 | orchestrator | 2026-02-14 05:23:40.723576 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-14 05:23:40.723587 | orchestrator | Saturday 14 February 2026 
05:23:36 +0000 (0:00:03.536) 0:00:39.224 ***** 2026-02-14 05:23:40.723609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-14 05:23:44.971615 | orchestrator | skipping: [testbed-node-0] 
2026-02-14 05:23:44.971716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-14 05:23:44.971727 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:23:44.971733 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-14 05:23:44.971754 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:23:44.971759 | orchestrator | 2026-02-14 05:23:44.971765 | orchestrator | TASK 
[service-check-containers : mariadb | Check containers] ******************* 2026-02-14 05:23:44.971771 | orchestrator | Saturday 14 February 2026 05:23:40 +0000 (0:00:04.342) 0:00:43.567 ***** 2026-02-14 05:23:44.971794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-14 05:23:44.971801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-14 05:23:44.971817 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-14 05:24:00.683770 | orchestrator | 2026-02-14 05:24:00.683924 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] 
*** 2026-02-14 05:24:00.683941 | orchestrator | Saturday 14 February 2026 05:23:44 +0000 (0:00:04.249) 0:00:47.817 ***** 2026-02-14 05:24:00.683954 | orchestrator | changed: [testbed-node-0] => { 2026-02-14 05:24:00.683967 | orchestrator |  "msg": "Notifying handlers" 2026-02-14 05:24:00.683978 | orchestrator | } 2026-02-14 05:24:00.684010 | orchestrator | changed: [testbed-node-1] => { 2026-02-14 05:24:00.684022 | orchestrator |  "msg": "Notifying handlers" 2026-02-14 05:24:00.684033 | orchestrator | } 2026-02-14 05:24:00.684044 | orchestrator | changed: [testbed-node-2] => { 2026-02-14 05:24:00.684054 | orchestrator |  "msg": "Notifying handlers" 2026-02-14 05:24:00.684065 | orchestrator | } 2026-02-14 05:24:00.684076 | orchestrator | 2026-02-14 05:24:00.684088 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-14 05:24:00.684099 | orchestrator | Saturday 14 February 2026 05:23:46 +0000 (0:00:01.389) 0:00:49.207 ***** 2026-02-14 05:24:00.684115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-14 05:24:00.684161 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:24:00.684197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': 
[' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-14 05:24:00.684216 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:24:00.684228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check 
port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-14 05:24:00.684335 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:24:00.684351 | orchestrator |
2026-02-14 05:24:00.684364 | orchestrator | TASK [mariadb : Checking for mariadb cluster] **********************************
2026-02-14 05:24:00.684378 | orchestrator | Saturday 14 February 2026 05:23:50 +0000 (0:00:04.233) 0:00:53.440 *****
2026-02-14 05:24:00.684390 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:24:00.684403 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:24:00.684416 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:24:00.684429 | orchestrator |
2026-02-14 05:24:00.684441 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] ****************************
2026-02-14 05:24:00.684453 | orchestrator | Saturday 14 February 2026 05:23:51 +0000 (0:00:01.395) 0:00:54.836 *****
2026-02-14 05:24:00.684466 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:24:00.684478 | orchestrator |
2026-02-14 05:24:00.684490 | orchestrator | TASK [mariadb : Stop MariaDB containers] ***************************************
2026-02-14 05:24:00.684503 | orchestrator | Saturday 14 February 2026 05:23:53 +0000 (0:00:01.149) 0:00:55.985 *****
2026-02-14 05:24:00.684515 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:24:00.684528 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:24:00.684540 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:24:00.684552 | orchestrator |
2026-02-14 05:24:00.684564 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************
2026-02-14 05:24:00.684576 | orchestrator | Saturday 14 February 2026 05:23:54 +0000 (0:00:01.463) 0:00:57.449 *****
2026-02-14 05:24:00.684589 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:24:00.684602 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:24:00.684613 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:24:00.684624 | orchestrator |
2026-02-14 05:24:00.684634 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ******************************
2026-02-14 05:24:00.684645 | orchestrator | Saturday 14 February 2026 05:23:56 +0000 (0:00:01.747) 0:00:59.197 *****
2026-02-14 05:24:00.684656 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:24:00.684666 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:24:00.684677 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:24:00.684688 | orchestrator |
2026-02-14 05:24:00.684698 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ******************************
2026-02-14 05:24:00.684709 | orchestrator | Saturday 14 February 2026 05:23:57 +0000 (0:00:01.464) 0:01:00.661 *****
2026-02-14 05:24:00.684720 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:24:00.684731 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:24:00.684741 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:24:00.684752 | orchestrator |
2026-02-14 05:24:00.684763 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] ***************************
2026-02-14 05:24:00.684782 | orchestrator | Saturday 14 February 2026 05:23:59 +0000 (0:00:01.409) 0:01:02.071 *****
2026-02-14 05:24:00.684793 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:24:00.684804 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:24:00.684815 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:24:00.684825 | orchestrator |
2026-02-14 05:24:00.684844 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] ****************************
2026-02-14 05:24:18.793273 | orchestrator | Saturday 14 February 2026 05:24:00 +0000 (0:00:01.452) 0:01:03.523 *****
2026-02-14 05:24:18.793486 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:24:18.793513 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:24:18.793530 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:24:18.793548 | orchestrator |
2026-02-14 05:24:18.793592 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ********************
2026-02-14 05:24:18.793609 | orchestrator | Saturday 14 February 2026 05:24:02 +0000 (0:00:01.615) 0:01:05.139 *****
2026-02-14 05:24:18.793626 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-14 05:24:18.793643 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-14 05:24:18.793660 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-14 05:24:18.793676 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:24:18.793693 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-14 05:24:18.793709 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-14 05:24:18.793725 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-14 05:24:18.793742 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:24:18.793759 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-14 05:24:18.793777 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-14 05:24:18.793794 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-14 05:24:18.793812 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:24:18.793829 | orchestrator |
2026-02-14 05:24:18.793847 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] ***
2026-02-14 05:24:18.793865 | orchestrator | Saturday 14 February 2026 05:24:03 +0000 (0:00:01.428) 0:01:06.567 *****
2026-02-14 05:24:18.793882 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:24:18.793897 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:24:18.793912 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:24:18.793925 | orchestrator |
2026-02-14 05:24:18.793940 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] *****
2026-02-14 05:24:18.793955 | orchestrator | Saturday 14 February 2026 05:24:05 +0000 (0:00:01.406) 0:01:07.974 *****
2026-02-14 05:24:18.793970 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:24:18.793983 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:24:18.793996 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:24:18.794010 | orchestrator |
2026-02-14 05:24:18.794101 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] ***************
2026-02-14 05:24:18.794116 | orchestrator | Saturday 14 February 2026 05:24:06 +0000 (0:00:01.449) 0:01:09.423 *****
2026-02-14 05:24:18.794141 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:24:18.794154 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:24:18.794168 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:24:18.794182 | orchestrator |
2026-02-14 05:24:18.794196 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] ***
2026-02-14 05:24:18.794210 | orchestrator | Saturday 14 February 2026 05:24:08 +0000 (0:00:01.502) 0:01:10.926 *****
2026-02-14 05:24:18.794224 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:24:18.794237 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:24:18.794250 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:24:18.794264 | orchestrator |
2026-02-14 05:24:18.794277 | orchestrator | TASK [mariadb : Starting first MariaDB container] ******************************
2026-02-14 05:24:18.794312 | orchestrator | Saturday 14 February 2026 05:24:09 +0000 (0:00:01.362) 0:01:12.288 *****
2026-02-14 05:24:18.794357 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:24:18.794370 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:24:18.794383 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:24:18.794397 | orchestrator |
2026-02-14 05:24:18.794411 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ******************************
2026-02-14 05:24:18.794423 | orchestrator | Saturday 14 February 2026 05:24:10 +0000 (0:00:01.413) 0:01:13.702 *****
2026-02-14 05:24:18.794437 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:24:18.794450 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:24:18.794462 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:24:18.794476 | orchestrator |
2026-02-14 05:24:18.794489 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************
2026-02-14 05:24:18.794502 | orchestrator | Saturday 14 February 2026 05:24:12 +0000 (0:00:01.612) 0:01:15.314 *****
2026-02-14 05:24:18.794514 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:24:18.794529 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:24:18.794543 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:24:18.794555 | orchestrator |
2026-02-14 05:24:18.794569 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************
2026-02-14 05:24:18.794582 | orchestrator | Saturday 14 February 2026 05:24:13 +0000 (0:00:01.356) 0:01:16.671 *****
2026-02-14 05:24:18.794596 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:24:18.794609 | orchestrator |
skipping: [testbed-node-1]
2026-02-14 05:24:18.794621 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:24:18.794635 | orchestrator |
2026-02-14 05:24:18.794647 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] ****************************
2026-02-14 05:24:18.794661 | orchestrator | Saturday 14 February 2026 05:24:15 +0000 (0:00:01.469) 0:01:18.141 *****
2026-02-14 05:24:18.794724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-14 05:24:18.794744 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:24:18.794759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-14 05:24:18.794784 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:24:18.794813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-14 05:24:36.059174 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:24:36.059376 | orchestrator |
2026-02-14 05:24:36.059396 | orchestrator | TASK [mariadb : Wait for slave MariaDB] ****************************************
2026-02-14 05:24:36.059410 | orchestrator | Saturday 14 February 2026 05:24:18 +0000 (0:00:03.494) 0:01:21.636 *****
2026-02-14 05:24:36.059421 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:24:36.059432 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:24:36.059443 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:24:36.059454 | orchestrator |
2026-02-14 05:24:36.059466 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] ***************************
2026-02-14 05:24:36.059503 | orchestrator | Saturday 14 February 2026 05:24:20 +0000 (0:00:01.607) 0:01:23.243 *****
2026-02-14 05:24:36.059520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-14 05:24:36.059535 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:24:36.059589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-14 05:24:36.059603 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:24:36.059615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-14 05:24:36.059635 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:24:36.059646 | orchestrator |
2026-02-14 05:24:36.059658 | orchestrator | TASK [mariadb : Wait for master mariadb] ***************************************
2026-02-14 05:24:36.059670 | orchestrator | Saturday 14 February 2026 05:24:23 +0000 (0:00:03.567) 0:01:26.810 *****
2026-02-14 05:24:36.059684 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:24:36.059697 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:24:36.059709 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:24:36.059722 | orchestrator |
2026-02-14 05:24:36.059735 | orchestrator | TASK [service-check : mariadb | Get container facts] ***************************
2026-02-14 05:24:36.059748 | orchestrator | Saturday 14 February 2026 05:24:25 +0000 (0:00:01.863) 0:01:28.674 *****
2026-02-14 05:24:36.059761 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:24:36.059773 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:24:36.059786 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:24:36.059798 | orchestrator |
2026-02-14 05:24:36.059811 | orchestrator | TASK [service-check : mariadb | Fail
if containers are missing or not running] ***
2026-02-14 05:24:36.059825 | orchestrator | Saturday 14 February 2026 05:24:27 +0000 (0:00:01.380) 0:01:30.054 *****
2026-02-14 05:24:36.059838 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:24:36.059850 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:24:36.059862 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:24:36.059874 | orchestrator |
2026-02-14 05:24:36.059887 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] **************
2026-02-14 05:24:36.059900 | orchestrator | Saturday 14 February 2026 05:24:28 +0000 (0:00:01.504) 0:01:31.559 *****
2026-02-14 05:24:36.059912 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:24:36.059926 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:24:36.059938 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:24:36.059951 | orchestrator |
2026-02-14 05:24:36.059963 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-02-14 05:24:36.059976 | orchestrator | Saturday 14 February 2026 05:24:30 +0000 (0:00:02.045) 0:01:33.367 *****
2026-02-14 05:24:36.059989 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:24:36.060001 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:24:36.060014 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:24:36.060033 | orchestrator |
2026-02-14 05:24:36.060044 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-02-14 05:24:36.060060 | orchestrator | Saturday 14 February 2026 05:24:32 +0000 (0:00:02.045) 0:01:35.413 *****
2026-02-14 05:24:36.060072 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:24:36.060084 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:24:36.060094 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:24:36.060106 | orchestrator |
2026-02-14 05:24:36.060117 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-02-14 05:24:36.060128 | orchestrator | Saturday 14 February 2026 05:24:34 +0000 (0:00:01.878) 0:01:37.291 *****
2026-02-14 05:24:36.060140 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:24:36.060151 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:24:36.060161 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:24:36.060172 | orchestrator |
2026-02-14 05:24:36.060183 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-02-14 05:24:36.060194 | orchestrator | Saturday 14 February 2026 05:24:35 +0000 (0:00:01.389) 0:01:38.680 *****
2026-02-14 05:24:36.060212 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:27:20.566731 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:27:20.566886 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:27:20.566904 | orchestrator |
2026-02-14 05:27:20.566917 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-02-14 05:27:20.566930 | orchestrator | Saturday 14 February 2026 05:24:37 +0000 (0:00:01.362) 0:01:40.043 *****
2026-02-14 05:27:20.566941 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:27:20.566952 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:27:20.566963 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:27:20.566974 | orchestrator |
2026-02-14 05:27:20.566985 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-02-14 05:27:20.566996 | orchestrator | Saturday 14 February 2026 05:24:39 +0000 (0:00:02.232) 0:01:42.276 *****
2026-02-14 05:27:20.567007 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:27:20.567017 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:27:20.567028 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:27:20.567039 | orchestrator |
2026-02-14 05:27:20.567050 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-02-14 05:27:20.567061 | orchestrator | Saturday 14 February 2026 05:24:40 +0000 (0:00:01.366) 0:01:43.643 *****
2026-02-14 05:27:20.567072 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:27:20.567084 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:27:20.567095 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:27:20.567106 | orchestrator |
2026-02-14 05:27:20.567116 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-02-14 05:27:20.567127 | orchestrator | Saturday 14 February 2026 05:24:42 +0000 (0:00:01.356) 0:01:44.999 *****
2026-02-14 05:27:20.567138 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:27:20.567149 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:27:20.567160 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:27:20.567171 | orchestrator |
2026-02-14 05:27:20.567182 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-02-14 05:27:20.567193 | orchestrator | Saturday 14 February 2026 05:24:45 +0000 (0:00:03.699) 0:01:48.698 *****
2026-02-14 05:27:20.567203 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:27:20.567226 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:27:20.567247 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:27:20.567260 | orchestrator |
2026-02-14 05:27:20.567273 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-02-14 05:27:20.567286 | orchestrator | Saturday 14 February 2026 05:24:47 +0000 (0:00:01.445) 0:01:50.144 *****
2026-02-14 05:27:20.567298 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:27:20.567310 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:27:20.567323 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:27:20.567335 | orchestrator |
2026-02-14 05:27:20.567348 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-02-14 05:27:20.567404 | orchestrator | Saturday 14 February 2026 05:24:48 +0000 (0:00:01.357) 0:01:51.501 *****
2026-02-14 05:27:20.567417 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:27:20.567430 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:27:20.567442 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:27:20.567454 | orchestrator |
2026-02-14 05:27:20.567467 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-14 05:27:20.567480 | orchestrator | Saturday 14 February 2026 05:24:50 +0000 (0:00:01.756) 0:01:53.258 *****
2026-02-14 05:27:20.567492 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:27:20.567504 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:27:20.567517 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:27:20.567529 | orchestrator |
2026-02-14 05:27:20.567541 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-14 05:27:20.567554 | orchestrator | Saturday 14 February 2026 05:24:51 +0000 (0:00:01.575) 0:01:54.834 *****
2026-02-14 05:27:20.567566 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:27:20.567578 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:27:20.567590 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:27:20.567603 | orchestrator |
2026-02-14 05:27:20.567614 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-02-14 05:27:20.567624 | orchestrator | Saturday 14 February 2026 05:24:53 +0000 (0:00:01.663) 0:01:56.497 *****
2026-02-14 05:27:20.567635 | orchestrator | changed: [testbed-node-0]
2026-02-14 05:27:20.567646 | orchestrator | changed: [testbed-node-1]
2026-02-14 05:27:20.567657 | orchestrator | changed: [testbed-node-2]
2026-02-14 05:27:20.567667 | orchestrator |
2026-02-14 05:27:20.567678 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-02-14 05:27:20.567689 | orchestrator | Saturday 14 February 2026 05:24:55 +0000 (0:00:01.584) 0:01:58.082 *****
2026-02-14 05:27:20.567700 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:27:20.567710 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:27:20.567721 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:27:20.567731 | orchestrator |
2026-02-14 05:27:20.567742 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-02-14 05:27:20.567753 | orchestrator |
2026-02-14 05:27:20.567763 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-02-14 05:27:20.567774 | orchestrator | Saturday 14 February 2026 05:24:57 +0000 (0:00:01.955) 0:02:00.038 *****
2026-02-14 05:27:20.567784 | orchestrator | changed: [testbed-node-0]
2026-02-14 05:27:20.567795 | orchestrator |
2026-02-14 05:27:20.567806 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-14 05:27:20.567817 | orchestrator | Saturday 14 February 2026 05:25:23 +0000 (0:00:26.713) 0:02:26.751 *****
2026-02-14 05:27:20.567843 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:27:20.567855 | orchestrator |
2026-02-14 05:27:20.567865 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-02-14 05:27:20.567876 | orchestrator | Saturday 14 February 2026 05:25:29 +0000 (0:00:05.591) 0:02:32.343 *****
2026-02-14 05:27:20.567887 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:27:20.567898 | orchestrator |
2026-02-14 05:27:20.567908 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-02-14 05:27:20.567919 | orchestrator |
2026-02-14 05:27:20.567930 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-02-14 05:27:20.567940 | orchestrator | Saturday 14 February 2026 05:25:32 +0000 (0:00:02.986) 0:02:35.330 *****
2026-02-14 05:27:20.567951 | orchestrator | changed: [testbed-node-1]
2026-02-14 05:27:20.567962 | orchestrator |
2026-02-14 05:27:20.567972 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-14 05:27:20.568002 | orchestrator | Saturday 14 February 2026 05:25:59 +0000 (0:00:26.713) 0:03:02.043 *****
2026-02-14 05:27:20.568014 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Wait for MariaDB service port liveness (10 retries left).
2026-02-14 05:27:20.568026 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:27:20.568048 | orchestrator |
2026-02-14 05:27:20.568059 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-02-14 05:27:20.568070 | orchestrator | Saturday 14 February 2026 05:26:07 +0000 (0:00:08.109) 0:03:10.153 *****
2026-02-14 05:27:20.568081 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:27:20.568092 | orchestrator |
2026-02-14 05:27:20.568102 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-02-14 05:27:20.568113 | orchestrator |
2026-02-14 05:27:20.568123 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-02-14 05:27:20.568134 | orchestrator | Saturday 14 February 2026 05:26:11 +0000 (0:00:03.752) 0:03:13.905 *****
2026-02-14 05:27:20.568145 | orchestrator | changed: [testbed-node-2]
2026-02-14 05:27:20.568156 | orchestrator |
2026-02-14 05:27:20.568166 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-14 05:27:20.568177 | orchestrator | Saturday 14 February 2026 05:26:38 +0000 (0:00:27.675) 0:03:41.580 *****
2026-02-14 05:27:20.568188 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Wait for MariaDB service port liveness (10 retries left).
2026-02-14 05:27:20.568198 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:27:20.568209 | orchestrator |
2026-02-14 05:27:20.568220 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-02-14 05:27:20.568230 | orchestrator | Saturday 14 February 2026 05:26:46 +0000 (0:00:07.924) 0:03:49.505 *****
2026-02-14 05:27:20.568241 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2026-02-14 05:27:20.568252 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-02-14 05:27:20.568262 | orchestrator | mariadb_bootstrap_restart
2026-02-14 05:27:20.568273 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:27:20.568284 | orchestrator |
2026-02-14 05:27:20.568295 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-02-14 05:27:20.568305 | orchestrator | skipping: no hosts matched
2026-02-14 05:27:20.568316 | orchestrator |
2026-02-14 05:27:20.568327 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-02-14 05:27:20.568337 | orchestrator | skipping: no hosts matched
2026-02-14 05:27:20.568348 | orchestrator |
2026-02-14 05:27:20.568375 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-02-14 05:27:20.568387 | orchestrator |
2026-02-14 05:27:20.568398 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-02-14 05:27:20.568409 | orchestrator | Saturday 14 February 2026 05:26:50 +0000 (0:00:04.079) 0:03:53.585 *****
2026-02-14 05:27:20.568419 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 05:27:20.568430 | orchestrator |
2026-02-14 05:27:20.568441 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-02-14 05:27:20.568451 | orchestrator | Saturday 14 February 2026 05:26:52 +0000 (0:00:01.872) 0:03:55.457 *****
2026-02-14 05:27:20.568462 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:27:20.568473 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:27:20.568484 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:27:20.568494 | orchestrator |
2026-02-14 05:27:20.568505 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-02-14 05:27:20.568516 | orchestrator | Saturday 14 February 2026 05:26:55 +0000 (0:00:03.238) 0:03:58.695 *****
2026-02-14 05:27:20.568526 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:27:20.568537 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:27:20.568548 | orchestrator | changed: [testbed-node-0]
2026-02-14 05:27:20.568558 | orchestrator |
2026-02-14 05:27:20.568569 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-02-14 05:27:20.568580 | orchestrator | Saturday 14 February 2026 05:26:59 +0000 (0:00:03.178) 0:04:01.874 *****
2026-02-14 05:27:20.568590 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:27:20.568601 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:27:20.568612 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:27:20.568623 | orchestrator |
2026-02-14 05:27:20.568640 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2026-02-14 05:27:20.568651 | orchestrator | Saturday 14 February 2026 05:27:02 +0000 (0:00:03.197) 0:04:05.072 *****
2026-02-14 05:27:20.568662 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:27:20.568673 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:27:20.568683 | orchestrator | changed: [testbed-node-0]
2026-02-14 05:27:20.568694 | orchestrator |
2026-02-14 05:27:20.568705 | orchestrator | TASK [service-check : mariadb | Get container facts] ***************************
2026-02-14 05:27:20.568715 | orchestrator | Saturday 14 February 2026 05:27:05 +0000 (0:00:03.166) 0:04:08.239 *****
2026-02-14 05:27:20.568726 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:27:20.568737 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:27:20.568747 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:27:20.568758 | orchestrator |
2026-02-14 05:27:20.568769 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] ***
2026-02-14 05:27:20.568780 | orchestrator | Saturday 14 February 2026 05:27:11 +0000 (0:00:06.476) 0:04:14.716 *****
2026-02-14 05:27:20.568790 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:27:20.568806 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:27:20.568817 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:27:20.568828 | orchestrator |
2026-02-14 05:27:20.568839 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] **************
2026-02-14 05:27:20.568850 | orchestrator | Saturday 14 February 2026 05:27:15 +0000 (0:00:03.652) 0:04:18.368 *****
2026-02-14 05:27:20.568861 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:27:20.568871 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:27:20.568882 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:27:20.568893 | orchestrator |
2026-02-14 05:27:20.568903 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-02-14 05:27:20.568914 | orchestrator | Saturday 14 February 2026 05:27:17 +0000 (0:00:01.569) 0:04:19.938 *****
2026-02-14 05:27:20.568925 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:27:20.568936 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:27:20.568946 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:27:20.568957 | orchestrator |
2026-02-14 05:27:20.568975 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-02-14 05:27:40.978690 | orchestrator | Saturday 14 February 2026 05:27:20 +0000 (0:00:03.469) 0:04:23.407 *****
2026-02-14 05:27:40.979701 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 05:27:40.979772 | orchestrator |
2026-02-14 05:27:40.979799 | orchestrator | TASK [mariadb : Run upgrade in MariaDB container] ******************************
2026-02-14 05:27:40.979821 | orchestrator | Saturday 14 February 2026 05:27:22 +0000 (0:00:01.982) 0:04:25.390 *****
2026-02-14 05:27:40.979842 | orchestrator | changed: [testbed-node-0]
2026-02-14 05:27:40.979864 | orchestrator | changed: [testbed-node-2]
2026-02-14 05:27:40.979884 | orchestrator | changed: [testbed-node-1]
2026-02-14 05:27:40.979903 | orchestrator |
2026-02-14 05:27:40.979925 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 05:27:40.979947 | orchestrator | testbed-node-0 : ok=34  changed=8  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-14 05:27:40.979968 | orchestrator | testbed-node-1 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-02-14 05:27:40.979989 | orchestrator | testbed-node-2 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-02-14 05:27:40.980009 | orchestrator |
2026-02-14 05:27:40.980029 | orchestrator |
2026-02-14 05:27:40.980049 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 05:27:40.980070 | orchestrator | Saturday 14 February 2026 05:27:40 +0000 (0:00:17.975) 0:04:43.365 *****
2026-02-14 05:27:40.980091 | orchestrator | ===============================================================================
2026-02-14 05:27:40.980147 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 81.10s
2026-02-14 05:27:40.980168 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 21.63s
2026-02-14 05:27:40.980187 | orchestrator | mariadb : Run upgrade in MariaDB container ----------------------------- 17.98s
2026-02-14 05:27:40.980205 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ----------------------- 10.82s
2026-02-14 05:27:40.980224 | orchestrator | service-check : mariadb | Get container facts --------------------------- 6.48s
2026-02-14 05:27:40.980244 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.83s
2026-02-14 05:27:40.980265 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.54s
2026-02-14 05:27:40.980286 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 4.34s
2026-02-14 05:27:40.980305 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 4.25s
2026-02-14 05:27:40.980324 | orchestrator | service-check-containers : Include tasks -------------------------------- 4.23s
2026-02-14 05:27:40.980344 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.75s
2026-02-14 05:27:40.980413 | orchestrator | mariadb : Check MariaDB service WSREP sync status ----------------------- 3.70s
2026-02-14 05:27:40.980435 | orchestrator | service-check : mariadb | Fail if containers are missing or not running --- 3.65s
2026-02-14 05:27:40.980455 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.64s
2026-02-14 05:27:40.980474 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 3.57s
2026-02-14 05:27:40.980490 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.54s
2026-02-14 05:27:40.980509 | orchestrator | mariadb : Restart slave MariaDB container(s) ---------------------------- 3.49s
2026-02-14 05:27:40.980528 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.47s
2026-02-14 05:27:40.980546 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 3.24s
2026-02-14 05:27:40.980566 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 3.20s
2026-02-14 05:27:41.445299 | orchestrator | + osism apply -a upgrade rabbitmq
2026-02-14 05:27:43.450782 | orchestrator | 2026-02-14 05:27:43 | INFO  | Task f28ddf21-f27b-4d03-b9b5-6da7c1772ef7 (rabbitmq) was prepared for execution.
2026-02-14 05:27:43.450883 | orchestrator | 2026-02-14 05:27:43 | INFO  | It takes a moment until task f28ddf21-f27b-4d03-b9b5-6da7c1772ef7 (rabbitmq) has been started and output is visible here.
2026-02-14 05:28:26.949447 | orchestrator |
2026-02-14 05:28:26.949576 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-14 05:28:26.949599 | orchestrator |
2026-02-14 05:28:26.949616 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-14 05:28:26.949646 | orchestrator | Saturday 14 February 2026 05:27:49 +0000 (0:00:01.465) 0:00:01.465 *****
2026-02-14 05:28:26.949662 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:28:26.949673 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:28:26.949683 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:28:26.949693 | orchestrator |
2026-02-14 05:28:26.949702 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-14 05:28:26.949712 | orchestrator | Saturday 14 February 2026 05:27:51 +0000 (0:00:01.853) 0:00:03.318 *****
2026-02-14 05:28:26.949722 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-02-14 05:28:26.949732 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-02-14 05:28:26.949741 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-02-14 05:28:26.949750 | orchestrator |
2026-02-14 05:28:26.949760 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-02-14 05:28:26.949769 | orchestrator |
2026-02-14 05:28:26.949779 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-14 05:28:26.949788 | orchestrator | Saturday 14 February 2026 05:27:53 +0000 (0:00:02.169) 0:00:05.487 ***** 2026-02-14 05:28:26.949815 | orchestrator | included: /ansible/roles/rabbitmq/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 05:28:26.949827 | orchestrator | 2026-02-14 05:28:26.949836 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-14 05:28:26.949846 | orchestrator | Saturday 14 February 2026 05:27:55 +0000 (0:00:02.066) 0:00:07.554 ***** 2026-02-14 05:28:26.949855 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:28:26.949865 | orchestrator | 2026-02-14 05:28:26.949875 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-02-14 05:28:26.949884 | orchestrator | Saturday 14 February 2026 05:27:58 +0000 (0:00:02.364) 0:00:09.919 ***** 2026-02-14 05:28:26.949895 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:28:26.949904 | orchestrator | 2026-02-14 05:28:26.949914 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-02-14 05:28:26.949923 | orchestrator | Saturday 14 February 2026 05:28:01 +0000 (0:00:03.544) 0:00:13.464 ***** 2026-02-14 05:28:26.949933 | orchestrator | changed: [testbed-node-0] 2026-02-14 05:28:26.949943 | orchestrator | 2026-02-14 05:28:26.949952 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-02-14 05:28:26.949962 | orchestrator | Saturday 14 February 2026 05:28:11 +0000 (0:00:09.423) 0:00:22.887 ***** 2026-02-14 05:28:26.949971 | orchestrator | ok: [testbed-node-0] => { 2026-02-14 05:28:26.949981 | orchestrator |  "changed": false, 2026-02-14 05:28:26.949990 | orchestrator |  "msg": "All assertions passed" 2026-02-14 05:28:26.950000 | orchestrator | } 2026-02-14 
05:28:26.950010 | orchestrator | 2026-02-14 05:28:26.950076 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-02-14 05:28:26.950089 | orchestrator | Saturday 14 February 2026 05:28:12 +0000 (0:00:01.319) 0:00:24.206 ***** 2026-02-14 05:28:26.950098 | orchestrator | ok: [testbed-node-0] => { 2026-02-14 05:28:26.950108 | orchestrator |  "changed": false, 2026-02-14 05:28:26.950117 | orchestrator |  "msg": "All assertions passed" 2026-02-14 05:28:26.950127 | orchestrator | } 2026-02-14 05:28:26.950137 | orchestrator | 2026-02-14 05:28:26.950146 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-14 05:28:26.950190 | orchestrator | Saturday 14 February 2026 05:28:14 +0000 (0:00:01.654) 0:00:25.861 ***** 2026-02-14 05:28:26.950200 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 05:28:26.950210 | orchestrator | 2026-02-14 05:28:26.950219 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-14 05:28:26.950228 | orchestrator | Saturday 14 February 2026 05:28:15 +0000 (0:00:01.778) 0:00:27.639 ***** 2026-02-14 05:28:26.950238 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:28:26.950247 | orchestrator | 2026-02-14 05:28:26.950257 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-02-14 05:28:26.950266 | orchestrator | Saturday 14 February 2026 05:28:18 +0000 (0:00:02.252) 0:00:29.892 ***** 2026-02-14 05:28:26.950276 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:28:26.950285 | orchestrator | 2026-02-14 05:28:26.950294 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-02-14 05:28:26.950304 | orchestrator | Saturday 14 February 2026 05:28:20 +0000 (0:00:02.803) 0:00:32.696 ***** 2026-02-14 05:28:26.950313 | 
orchestrator | skipping: [testbed-node-0] 2026-02-14 05:28:26.950323 | orchestrator | 2026-02-14 05:28:26.950332 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-02-14 05:28:26.950342 | orchestrator | Saturday 14 February 2026 05:28:22 +0000 (0:00:01.893) 0:00:34.589 ***** 2026-02-14 05:28:26.950406 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-14 05:28:26.950443 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-14 05:28:26.950461 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-14 05:28:26.950479 | orchestrator | 2026-02-14 05:28:26.950496 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-02-14 05:28:26.950512 | orchestrator | Saturday 14 February 2026 05:28:24 +0000 (0:00:01.747) 0:00:36.337 ***** 2026-02-14 05:28:26.950529 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-14 05:28:26.950561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-14 05:28:47.410125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-14 05:28:47.410244 | orchestrator | 2026-02-14 05:28:47.410261 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-02-14 05:28:47.410274 | orchestrator | Saturday 14 February 2026 05:28:26 +0000 (0:00:02.438) 0:00:38.776 ***** 2026-02-14 05:28:47.410286 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-14 05:28:47.410297 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-14 05:28:47.410309 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-14 05:28:47.410320 | 
orchestrator | 2026-02-14 05:28:47.410331 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-02-14 05:28:47.410342 | orchestrator | Saturday 14 February 2026 05:28:29 +0000 (0:00:02.427) 0:00:41.203 ***** 2026-02-14 05:28:47.410353 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-14 05:28:47.410364 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-14 05:28:47.410374 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-14 05:28:47.410489 | orchestrator | 2026-02-14 05:28:47.410505 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-02-14 05:28:47.410516 | orchestrator | Saturday 14 February 2026 05:28:32 +0000 (0:00:03.219) 0:00:44.422 ***** 2026-02-14 05:28:47.410527 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-14 05:28:47.410538 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-14 05:28:47.410577 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-14 05:28:47.410591 | orchestrator | 2026-02-14 05:28:47.410603 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-02-14 05:28:47.410616 | orchestrator | Saturday 14 February 2026 05:28:35 +0000 (0:00:03.270) 0:00:47.693 ***** 2026-02-14 05:28:47.410628 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-14 05:28:47.410640 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-14 05:28:47.410652 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 
2026-02-14 05:28:47.410665 | orchestrator | 2026-02-14 05:28:47.410678 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-02-14 05:28:47.410690 | orchestrator | Saturday 14 February 2026 05:28:38 +0000 (0:00:02.441) 0:00:50.134 ***** 2026-02-14 05:28:47.410703 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-14 05:28:47.410715 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-14 05:28:47.410727 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-14 05:28:47.410737 | orchestrator | 2026-02-14 05:28:47.410748 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-02-14 05:28:47.410759 | orchestrator | Saturday 14 February 2026 05:28:40 +0000 (0:00:02.367) 0:00:52.501 ***** 2026-02-14 05:28:47.410769 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-14 05:28:47.410794 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-14 05:28:47.410806 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-14 05:28:47.410817 | orchestrator | 2026-02-14 05:28:47.410828 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-14 05:28:47.410839 | orchestrator | Saturday 14 February 2026 05:28:43 +0000 (0:00:02.582) 0:00:55.084 ***** 2026-02-14 05:28:47.410849 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 05:28:47.410860 | orchestrator | 2026-02-14 05:28:47.410890 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] ******* 2026-02-14 05:28:47.410902 | orchestrator | 
Saturday 14 February 2026 05:28:44 +0000 (0:00:01.682) 0:00:56.766 ***** 2026-02-14 05:28:47.410915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-14 05:28:47.410928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-14 05:28:47.410949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-14 05:28:47.410961 | orchestrator | 2026-02-14 05:28:47.410973 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-02-14 05:28:47.410984 | orchestrator | Saturday 14 February 2026 05:28:47 +0000 (0:00:02.255) 0:00:59.022 ***** 2026-02-14 05:28:47.411048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': 
{'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-14 05:28:56.227988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-14 05:28:56.228160 | orchestrator | skipping: 
[testbed-node-0] 2026-02-14 05:28:56.228197 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:28:56.228223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-14 05:28:56.228238 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:28:56.228249 | orchestrator | 2026-02-14 05:28:56.228261 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-02-14 05:28:56.228274 | orchestrator | Saturday 14 February 2026 05:28:48 +0000 (0:00:01.403) 0:01:00.426 ***** 2026-02-14 05:28:56.228304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-14 05:28:56.228342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-14 05:28:56.228365 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:28:56.228421 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:28:56.228435 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-14 05:28:56.228446 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:28:56.228457 | orchestrator | 2026-02-14 05:28:56.228468 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-14 05:28:56.228479 | orchestrator | Saturday 14 February 2026 05:28:50 +0000 (0:00:01.776) 0:01:02.202 ***** 2026-02-14 05:28:56.228490 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:28:56.228502 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:28:56.228515 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:28:56.228527 | orchestrator | 2026-02-14 05:28:56.228540 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-02-14 05:28:56.228552 | orchestrator | Saturday 14 February 2026 05:28:53 +0000 (0:00:03.614) 0:01:05.817 ***** 2026-02-14 05:28:56.228572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': 
{'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-14 05:28:56.228596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 
'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-14 05:30:40.958360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-14 05:30:40.958529 | orchestrator | 2026-02-14 05:30:40.958549 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-02-14 05:30:40.958562 | orchestrator | Saturday 14 February 2026 05:28:56 +0000 (0:00:02.244) 0:01:08.061 ***** 2026-02-14 05:30:40.958575 | orchestrator | changed: [testbed-node-0] => { 2026-02-14 05:30:40.958586 | orchestrator |  "msg": "Notifying handlers" 2026-02-14 05:30:40.958598 | orchestrator | } 2026-02-14 05:30:40.958609 | orchestrator | changed: [testbed-node-1] => { 2026-02-14 05:30:40.958620 | orchestrator |  "msg": "Notifying handlers" 2026-02-14 05:30:40.958631 | orchestrator | } 2026-02-14 05:30:40.958642 | orchestrator | changed: [testbed-node-2] => { 2026-02-14 
05:30:40.958652 | orchestrator |  "msg": "Notifying handlers" 2026-02-14 05:30:40.958663 | orchestrator | } 2026-02-14 05:30:40.958674 | orchestrator | 2026-02-14 05:30:40.958686 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-14 05:30:40.958697 | orchestrator | Saturday 14 February 2026 05:28:57 +0000 (0:00:01.355) 0:01:09.417 ***** 2026-02-14 05:30:40.958711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-14 05:30:40.958723 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:30:40.958735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-14 05:30:40.958774 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:30:40.958806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-14 05:30:40.958819 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:30:40.958830 | orchestrator | 
2026-02-14 05:30:40.958841 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-02-14 05:30:40.958852 | orchestrator | Saturday 14 February 2026 05:28:59 +0000 (0:00:02.080) 0:01:11.498 ***** 2026-02-14 05:30:40.958863 | orchestrator | changed: [testbed-node-0] 2026-02-14 05:30:40.958874 | orchestrator | changed: [testbed-node-1] 2026-02-14 05:30:40.958887 | orchestrator | changed: [testbed-node-2] 2026-02-14 05:30:40.958899 | orchestrator | 2026-02-14 05:30:40.958911 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-14 05:30:40.958923 | orchestrator | 2026-02-14 05:30:40.958936 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-14 05:30:40.958948 | orchestrator | Saturday 14 February 2026 05:29:01 +0000 (0:00:02.021) 0:01:13.519 ***** 2026-02-14 05:30:40.958961 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:30:40.958973 | orchestrator | 2026-02-14 05:30:40.958983 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-14 05:30:40.958994 | orchestrator | Saturday 14 February 2026 05:29:03 +0000 (0:00:02.103) 0:01:15.623 ***** 2026-02-14 05:30:40.959005 | orchestrator | changed: [testbed-node-0] 2026-02-14 05:30:40.959016 | orchestrator | 2026-02-14 05:30:40.959027 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-14 05:30:40.959038 | orchestrator | Saturday 14 February 2026 05:29:13 +0000 (0:00:09.389) 0:01:25.012 ***** 2026-02-14 05:30:40.959048 | orchestrator | changed: [testbed-node-0] 2026-02-14 05:30:40.959059 | orchestrator | 2026-02-14 05:30:40.959070 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-14 05:30:40.959081 | orchestrator | Saturday 14 February 2026 05:29:22 +0000 (0:00:09.106) 0:01:34.119 ***** 2026-02-14 05:30:40.959092 | 
orchestrator | changed: [testbed-node-0] 2026-02-14 05:30:40.959103 | orchestrator | 2026-02-14 05:30:40.959114 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-14 05:30:40.959125 | orchestrator | 2026-02-14 05:30:40.959136 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-14 05:30:40.959146 | orchestrator | Saturday 14 February 2026 05:29:31 +0000 (0:00:09.424) 0:01:43.543 ***** 2026-02-14 05:30:40.959157 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:30:40.959168 | orchestrator | 2026-02-14 05:30:40.959179 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-14 05:30:40.959297 | orchestrator | Saturday 14 February 2026 05:29:33 +0000 (0:00:01.616) 0:01:45.160 ***** 2026-02-14 05:30:40.959317 | orchestrator | changed: [testbed-node-1] 2026-02-14 05:30:40.959329 | orchestrator | 2026-02-14 05:30:40.959340 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-14 05:30:40.959350 | orchestrator | Saturday 14 February 2026 05:29:42 +0000 (0:00:08.843) 0:01:54.003 ***** 2026-02-14 05:30:40.959361 | orchestrator | changed: [testbed-node-1] 2026-02-14 05:30:40.959372 | orchestrator | 2026-02-14 05:30:40.959383 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-14 05:30:40.959393 | orchestrator | Saturday 14 February 2026 05:29:56 +0000 (0:00:14.396) 0:02:08.400 ***** 2026-02-14 05:30:40.959404 | orchestrator | changed: [testbed-node-1] 2026-02-14 05:30:40.959415 | orchestrator | 2026-02-14 05:30:40.959476 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-14 05:30:40.959493 | orchestrator | 2026-02-14 05:30:40.959504 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-14 05:30:40.959515 | 
orchestrator | Saturday 14 February 2026 05:30:06 +0000 (0:00:10.039) 0:02:18.440 ***** 2026-02-14 05:30:40.959526 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:30:40.959538 | orchestrator | 2026-02-14 05:30:40.959549 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-14 05:30:40.959560 | orchestrator | Saturday 14 February 2026 05:30:08 +0000 (0:00:01.746) 0:02:20.187 ***** 2026-02-14 05:30:40.959571 | orchestrator | changed: [testbed-node-2] 2026-02-14 05:30:40.959582 | orchestrator | 2026-02-14 05:30:40.959593 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-14 05:30:40.959604 | orchestrator | Saturday 14 February 2026 05:30:17 +0000 (0:00:08.699) 0:02:28.887 ***** 2026-02-14 05:30:40.959615 | orchestrator | changed: [testbed-node-2] 2026-02-14 05:30:40.959626 | orchestrator | 2026-02-14 05:30:40.959637 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-14 05:30:40.959648 | orchestrator | Saturday 14 February 2026 05:30:31 +0000 (0:00:13.998) 0:02:42.885 ***** 2026-02-14 05:30:40.959659 | orchestrator | changed: [testbed-node-2] 2026-02-14 05:30:40.959670 | orchestrator | 2026-02-14 05:30:40.959681 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-02-14 05:30:40.959692 | orchestrator | 2026-02-14 05:30:40.959703 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-02-14 05:30:40.959725 | orchestrator | Saturday 14 February 2026 05:30:40 +0000 (0:00:09.899) 0:02:52.784 ***** 2026-02-14 05:30:47.149938 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-14 05:30:47.150103 | orchestrator | 2026-02-14 05:30:47.150121 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-14 05:30:47.150133 | 
orchestrator | Saturday 14 February 2026 05:30:42 +0000 (0:00:01.311) 0:02:54.096 ***** 2026-02-14 05:30:47.150144 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:30:47.150191 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:30:47.150203 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:30:47.150214 | orchestrator | 2026-02-14 05:30:47.150224 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-14 05:30:47.150237 | orchestrator | testbed-node-0 : ok=31  changed=11  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-14 05:30:47.150249 | orchestrator | testbed-node-1 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-14 05:30:47.150261 | orchestrator | testbed-node-2 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-14 05:30:47.150272 | orchestrator | 2026-02-14 05:30:47.150283 | orchestrator | 2026-02-14 05:30:47.150295 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-14 05:30:47.150306 | orchestrator | Saturday 14 February 2026 05:30:46 +0000 (0:00:04.498) 0:02:58.595 ***** 2026-02-14 05:30:47.150340 | orchestrator | =============================================================================== 2026-02-14 05:30:47.150352 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 37.50s 2026-02-14 05:30:47.150362 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 29.36s 2026-02-14 05:30:47.150373 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode --------------------- 26.93s 2026-02-14 05:30:47.150384 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 9.42s 2026-02-14 05:30:47.150395 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 5.47s 2026-02-14 05:30:47.150405 | orchestrator | rabbitmq : Enable all 
stable feature flags ------------------------------ 4.50s 2026-02-14 05:30:47.150416 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.61s 2026-02-14 05:30:47.150427 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 3.54s 2026-02-14 05:30:47.150491 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 3.27s 2026-02-14 05:30:47.150504 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.22s 2026-02-14 05:30:47.150516 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 2.80s 2026-02-14 05:30:47.150528 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.58s 2026-02-14 05:30:47.150541 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.44s 2026-02-14 05:30:47.150554 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.44s 2026-02-14 05:30:47.150566 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.43s 2026-02-14 05:30:47.150577 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.37s 2026-02-14 05:30:47.150588 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.36s 2026-02-14 05:30:47.150599 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 2.26s 2026-02-14 05:30:47.150610 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.25s 2026-02-14 05:30:47.150620 | orchestrator | service-check-containers : rabbitmq | Check containers ------------------ 2.24s 2026-02-14 05:30:47.473808 | orchestrator | + osism apply -a upgrade openvswitch 2026-02-14 05:30:49.517851 | orchestrator | 2026-02-14 05:30:49 | INFO  | Task b20938ab-50a0-4dee-b645-d4eb4dc9be17 
(openvswitch) was prepared for execution. 2026-02-14 05:30:49.517972 | orchestrator | 2026-02-14 05:30:49 | INFO  | It takes a moment until task b20938ab-50a0-4dee-b645-d4eb4dc9be17 (openvswitch) has been started and output is visible here. 2026-02-14 05:31:07.302473 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-14 05:31:07.302595 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-14 05:31:07.302624 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-14 05:31:07.302636 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-14 05:31:07.302658 | orchestrator | 2026-02-14 05:31:07.302671 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-14 05:31:07.302682 | orchestrator | 2026-02-14 05:31:07.302693 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-14 05:31:07.302704 | orchestrator | Saturday 14 February 2026 05:30:54 +0000 (0:00:01.091) 0:00:01.091 ***** 2026-02-14 05:31:07.302715 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:31:07.302727 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:31:07.302738 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:31:07.302749 | orchestrator | ok: [testbed-node-3] 2026-02-14 05:31:07.302760 | orchestrator | ok: [testbed-node-4] 2026-02-14 05:31:07.302798 | orchestrator | ok: [testbed-node-5] 2026-02-14 05:31:07.302810 | orchestrator | 2026-02-14 05:31:07.302821 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-14 05:31:07.302832 | orchestrator | Saturday 14 February 2026 05:30:56 +0000 (0:00:01.515) 0:00:02.607 ***** 2026-02-14 05:31:07.302843 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-14 05:31:07.302854 | orchestrator | ok: [testbed-node-1] => 
(item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-14 05:31:07.302865 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-14 05:31:07.302875 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-14 05:31:07.302886 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-14 05:31:07.302897 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-14 05:31:07.302907 | orchestrator | 2026-02-14 05:31:07.302919 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-02-14 05:31:07.302929 | orchestrator | 2026-02-14 05:31:07.302940 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-02-14 05:31:07.302951 | orchestrator | Saturday 14 February 2026 05:30:57 +0000 (0:00:01.113) 0:00:03.721 ***** 2026-02-14 05:31:07.302962 | orchestrator | included: /ansible/roles/openvswitch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-14 05:31:07.302975 | orchestrator | 2026-02-14 05:31:07.302988 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-14 05:31:07.303001 | orchestrator | Saturday 14 February 2026 05:30:59 +0000 (0:00:02.436) 0:00:06.158 ***** 2026-02-14 05:31:07.303014 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-02-14 05:31:07.303028 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-02-14 05:31:07.303040 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-02-14 05:31:07.303052 | orchestrator | ok: [testbed-node-3] => (item=openvswitch) 2026-02-14 05:31:07.303065 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-02-14 05:31:07.303078 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 
2026-02-14 05:31:07.303091 | orchestrator | 2026-02-14 05:31:07.303104 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-14 05:31:07.303117 | orchestrator | Saturday 14 February 2026 05:31:01 +0000 (0:00:01.436) 0:00:07.594 ***** 2026-02-14 05:31:07.303130 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-02-14 05:31:07.303142 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-02-14 05:31:07.303154 | orchestrator | ok: [testbed-node-3] => (item=openvswitch) 2026-02-14 05:31:07.303167 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-02-14 05:31:07.303179 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-02-14 05:31:07.303192 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-02-14 05:31:07.303204 | orchestrator | 2026-02-14 05:31:07.303217 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-14 05:31:07.303230 | orchestrator | Saturday 14 February 2026 05:31:02 +0000 (0:00:01.427) 0:00:09.022 ***** 2026-02-14 05:31:07.303242 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-02-14 05:31:07.303255 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:31:07.303268 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-02-14 05:31:07.303280 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:31:07.303293 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-02-14 05:31:07.303306 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:31:07.303319 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-02-14 05:31:07.303332 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:31:07.303343 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-02-14 05:31:07.303354 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:31:07.303373 | orchestrator | skipping: [testbed-node-5] => 
(item=openvswitch)  2026-02-14 05:31:07.303384 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:31:07.303395 | orchestrator | 2026-02-14 05:31:07.303406 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-02-14 05:31:07.303416 | orchestrator | Saturday 14 February 2026 05:31:04 +0000 (0:00:01.878) 0:00:10.900 ***** 2026-02-14 05:31:07.303427 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:31:07.303480 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:31:07.303493 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:31:07.303504 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:31:07.303515 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:31:07.303544 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:31:07.303556 | orchestrator | 2026-02-14 05:31:07.303567 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-02-14 05:31:07.303579 | orchestrator | Saturday 14 February 2026 05:31:05 +0000 (0:00:01.018) 0:00:11.919 ***** 2026-02-14 05:31:07.303592 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-14 05:31:07.303610 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-14 05:31:07.303622 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-14 05:31:07.303633 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-14 05:31:07.303653 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-14 05:31:07.303680 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-14 05:31:09.516711 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 
'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-14 05:31:09.516813 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-14 05:31:09.516829 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-14 05:31:09.516841 | orchestrator | ok: [testbed-node-5] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-14 05:31:09.516895 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-14 05:31:09.516927 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-14 05:31:09.516940 | orchestrator | 2026-02-14 05:31:09.516953 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-02-14 05:31:09.516965 | orchestrator | Saturday 14 February 2026 05:31:07 +0000 (0:00:01.657) 0:00:13.576 ***** 2026-02-14 05:31:09.516977 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-14 05:31:09.516989 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 
2026-02-14 05:31:09.517000 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-14 05:31:09.517021 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-14 05:31:09.517038 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-14 05:31:09.517058 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-14 05:31:12.973158 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-14 05:31:12.973296 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-14 05:31:12.973326 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-14 05:31:12.973369 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-14 05:31:12.973398 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-14 05:31:12.973432 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-14 05:31:12.973507 | orchestrator | 2026-02-14 05:31:12.973522 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-02-14 05:31:12.973536 | orchestrator | Saturday 14 February 2026 05:31:09 +0000 (0:00:02.334) 0:00:15.910 ***** 2026-02-14 05:31:12.973547 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:31:12.973559 | orchestrator | skipping: [testbed-node-1] 2026-02-14 
05:31:12.973569 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:31:12.973580 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:31:12.973590 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:31:12.973601 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:31:12.973611 | orchestrator | 2026-02-14 05:31:12.973622 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] *************** 2026-02-14 05:31:12.973633 | orchestrator | Saturday 14 February 2026 05:31:10 +0000 (0:00:01.335) 0:00:17.246 ***** 2026-02-14 05:31:12.973644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-14 05:31:12.973669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-14 05:31:12.973688 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-14 05:31:12.973702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-14 05:31:12.973727 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-14 05:31:14.315324 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-14 05:31:14.315504 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-14 05:31:14.315526 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-14 05:31:14.315552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-14 05:31:14.315565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-14 05:31:14.315596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-14 05:31:14.315609 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-14 05:31:14.315631 | orchestrator | 2026-02-14 05:31:14.315644 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] *** 2026-02-14 05:31:14.315656 | orchestrator | Saturday 14 February 2026 05:31:13 +0000 
(0:00:02.132) 0:00:19.378 ***** 2026-02-14 05:31:14.315668 | orchestrator | changed: [testbed-node-0] => { 2026-02-14 05:31:14.315680 | orchestrator |  "msg": "Notifying handlers" 2026-02-14 05:31:14.315691 | orchestrator | } 2026-02-14 05:31:14.315702 | orchestrator | changed: [testbed-node-1] => { 2026-02-14 05:31:14.315713 | orchestrator |  "msg": "Notifying handlers" 2026-02-14 05:31:14.315723 | orchestrator | } 2026-02-14 05:31:14.315734 | orchestrator | changed: [testbed-node-2] => { 2026-02-14 05:31:14.315744 | orchestrator |  "msg": "Notifying handlers" 2026-02-14 05:31:14.315756 | orchestrator | } 2026-02-14 05:31:14.315766 | orchestrator | changed: [testbed-node-3] => { 2026-02-14 05:31:14.315777 | orchestrator |  "msg": "Notifying handlers" 2026-02-14 05:31:14.315788 | orchestrator | } 2026-02-14 05:31:14.315798 | orchestrator | changed: [testbed-node-4] => { 2026-02-14 05:31:14.315809 | orchestrator |  "msg": "Notifying handlers" 2026-02-14 05:31:14.315820 | orchestrator | } 2026-02-14 05:31:14.315830 | orchestrator | changed: [testbed-node-5] => { 2026-02-14 05:31:14.315841 | orchestrator |  "msg": "Notifying handlers" 2026-02-14 05:31:14.315854 | orchestrator | } 2026-02-14 05:31:14.315866 | orchestrator | 2026-02-14 05:31:14.315879 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-14 05:31:14.315892 | orchestrator | Saturday 14 February 2026 05:31:13 +0000 (0:00:00.873) 0:00:20.251 ***** 2026-02-14 05:31:14.315912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-14 05:31:14.315926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-14 05:31:14.315940 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:31:14.315953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-14 05:31:14.315981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': 
{'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-14 05:31:38.745825 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:31:38.745931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-14 05:31:38.745946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-14 05:31:38.745957 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:31:38.745982 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-14 05:31:38.745992 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-14 05:31:38.746073 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback
2026-02-14 05:31:38.746085 | orchestrator | plugin (): 'NoneType' object is not subscriptable
2026-02-14 05:31:38.746104 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:31:38.746113 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-14 05:31:38.746166 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-14 05:31:38.746178 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:31:38.746187 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-14 05:31:38.746204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-14 05:31:38.746214 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:31:38.746223 | orchestrator |
2026-02-14 05:31:38.746232 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-14 05:31:38.746241 | orchestrator | Saturday 14 February 2026 05:31:15 +0000 (0:00:01.984) 0:00:22.235 *****
2026-02-14 05:31:38.746250 | orchestrator |
2026-02-14 05:31:38.746266 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-14 05:31:38.746274 | orchestrator | Saturday 14 February 2026 05:31:16 +0000 (0:00:00.149) 0:00:22.385 *****
2026-02-14 05:31:38.746283 | orchestrator |
2026-02-14 05:31:38.746291 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-14 05:31:38.746300 | orchestrator | Saturday 14 February 2026 05:31:16 +0000 (0:00:00.156) 0:00:22.541 *****
2026-02-14 05:31:38.746309 | orchestrator |
2026-02-14 05:31:38.746317 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-14 05:31:38.746326 | orchestrator | Saturday 14 February 2026 05:31:16 +0000 (0:00:00.142) 0:00:22.684 *****
2026-02-14 05:31:38.746335 | orchestrator |
2026-02-14 05:31:38.746343 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-14 05:31:38.746352 | orchestrator | Saturday 14 February 2026 05:31:16 +0000 (0:00:00.347) 0:00:23.031 *****
2026-02-14 05:31:38.746360 | orchestrator |
2026-02-14 05:31:38.746370 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-14 05:31:38.746380 | orchestrator | Saturday 14 February 2026 05:31:16 +0000 (0:00:00.147) 0:00:23.178 *****
2026-02-14 05:31:38.746390 | orchestrator |
2026-02-14 05:31:38.746399 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-02-14 05:31:38.746409 | orchestrator | Saturday 14 February 2026 05:31:17 +0000 (0:00:00.151) 0:00:23.329 *****
2026-02-14 05:31:38.746418 | orchestrator | changed: [testbed-node-3]
2026-02-14 05:31:38.746429 | orchestrator | changed: [testbed-node-4]
2026-02-14 05:31:38.746438 | orchestrator | changed: [testbed-node-0]
2026-02-14 05:31:38.746485 | orchestrator | changed: [testbed-node-5]
2026-02-14 05:31:38.746513 | orchestrator | changed: [testbed-node-1]
2026-02-14 05:31:38.746530 | orchestrator | changed: [testbed-node-2]
2026-02-14 05:31:38.746545 | orchestrator |
2026-02-14 05:31:38.746562 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-02-14 05:31:38.746579 | orchestrator | Saturday 14 February 2026 05:31:27 +0000 (0:00:10.549) 0:00:33.878 *****
2026-02-14 05:31:38.746597 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:31:38.746613 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:31:38.746623 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:31:38.746633 | orchestrator | ok: [testbed-node-3]
2026-02-14 05:31:38.746642 | orchestrator | ok: [testbed-node-4]
2026-02-14 05:31:38.746652 | orchestrator | ok: [testbed-node-5]
2026-02-14 05:31:38.746662 | orchestrator |
2026-02-14 05:31:38.746672 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-02-14 05:31:38.746683 | orchestrator | Saturday 14 February 2026 05:31:28 +0000 (0:00:01.143) 0:00:35.022 *****
2026-02-14 05:31:38.746693 | orchestrator | changed: [testbed-node-4]
2026-02-14 05:31:38.746710 | orchestrator | changed: [testbed-node-3]
2026-02-14 05:31:52.045676 | orchestrator | changed: [testbed-node-5]
2026-02-14 05:31:52.045796 | orchestrator | changed: [testbed-node-1]
2026-02-14 05:31:52.045812 | orchestrator | changed: [testbed-node-2]
2026-02-14 05:31:52.045824 | orchestrator | changed: [testbed-node-0]
2026-02-14 05:31:52.045835 | orchestrator |
2026-02-14 05:31:52.045847 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-02-14 05:31:52.045860 | orchestrator | Saturday 14 February 2026 05:31:38 +0000 (0:00:10.000) 0:00:45.022 *****
2026-02-14 05:31:52.045871 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-02-14 05:31:52.045884 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-02-14 05:31:52.045895 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-02-14 05:31:52.045906 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-02-14 05:31:52.045916 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-02-14 05:31:52.045951 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-02-14 05:31:52.045963 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-02-14 05:31:52.045973 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-02-14 05:31:52.045984 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-02-14 05:31:52.045995 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-02-14 05:31:52.046005 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-02-14 05:31:52.046074 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-02-14 05:31:52.046087 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-14 05:31:52.046112 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-14 05:31:52.046123 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-14 05:31:52.046134 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-14 05:31:52.046144 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-14 05:31:52.046155 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-14 05:31:52.046166 | orchestrator |
2026-02-14 05:31:52.046177 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-02-14 05:31:52.046188 | orchestrator | Saturday 14 February 2026 05:31:44 +0000 (0:00:06.253) 0:00:51.276 *****
2026-02-14 05:31:52.046199 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-02-14 05:31:52.046210 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:31:52.046221 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-02-14 05:31:52.046231 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:31:52.046242 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-02-14 05:31:52.046253 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:31:52.046263 | orchestrator | ok: [testbed-node-0] => (item=br-ex)
2026-02-14 05:31:52.046274 | orchestrator | ok: [testbed-node-1] => (item=br-ex)
2026-02-14 05:31:52.046285 | orchestrator | ok: [testbed-node-2] => (item=br-ex)
2026-02-14 05:31:52.046295 | orchestrator |
2026-02-14 05:31:52.046306 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-02-14 05:31:52.046317 | orchestrator | Saturday 14 February 2026 05:31:47 +0000 (0:00:02.346) 0:00:53.623 *****
2026-02-14 05:31:52.046328 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-02-14 05:31:52.046339 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:31:52.046350 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-02-14 05:31:52.046361 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:31:52.046372 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-02-14 05:31:52.046383 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:31:52.046393 | orchestrator | ok: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-02-14 05:31:52.046404 | orchestrator | ok: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-02-14 05:31:52.046415 | orchestrator | ok: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-02-14 05:31:52.046426 | orchestrator |
2026-02-14 05:31:52.046436 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 05:31:52.046448 | orchestrator | testbed-node-0 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-14 05:31:52.046492 | orchestrator | testbed-node-1 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-14 05:31:52.046522 | orchestrator | testbed-node-2 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-14 05:31:52.046534 | orchestrator | testbed-node-3 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-14 05:31:52.046545 | orchestrator | testbed-node-4 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-14 05:31:52.046556 | orchestrator | testbed-node-5 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-14 05:31:52.046566 | orchestrator |
2026-02-14 05:31:52.046577 | orchestrator |
2026-02-14 05:31:52.046588 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 05:31:52.046598 | orchestrator | Saturday 14 February 2026 05:31:51 +0000 (0:00:04.263) 0:00:57.886 *****
2026-02-14 05:31:52.046609 | orchestrator | ===============================================================================
2026-02-14 05:31:52.046619 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.55s
2026-02-14 05:31:52.046630 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 10.00s
2026-02-14 05:31:52.046640 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.25s
2026-02-14 05:31:52.046651 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.26s
2026-02-14 05:31:52.046661 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.44s
2026-02-14 05:31:52.046672 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.35s
2026-02-14 05:31:52.046682 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.33s
2026-02-14 05:31:52.046693 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 2.13s
2026-02-14 05:31:52.046703 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.98s
2026-02-14 05:31:52.046714 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.88s
2026-02-14 05:31:52.046725 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.66s
2026-02-14 05:31:52.046743 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.52s
2026-02-14 05:31:52.046768 | orchestrator | module-load : Load modules ---------------------------------------------- 1.44s
2026-02-14 05:31:52.046787 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.43s
2026-02-14 05:31:52.046806 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.34s
2026-02-14 05:31:52.046821 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.14s
2026-02-14 05:31:52.046832 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.11s
2026-02-14 05:31:52.046843 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.09s
2026-02-14 05:31:52.046853 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.02s
2026-02-14 05:31:52.046864 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 0.87s
2026-02-14 05:31:52.370979 | orchestrator | + osism apply -a upgrade ovn
2026-02-14 05:31:54.452775 | orchestrator | 2026-02-14 05:31:54 | INFO  | Task d0bfd67e-a64b-4e81-9b16-f0ecad3cd395 (ovn) was prepared for execution.
2026-02-14 05:31:54.452975 | orchestrator | 2026-02-14 05:31:54 | INFO  | It takes a moment until task d0bfd67e-a64b-4e81-9b16-f0ecad3cd395 (ovn) has been started and output is visible here.
2026-02-14 05:32:09.389874 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-02-14 05:32:09.390117 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-02-14 05:32:09.390177 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-02-14 05:32:09.390190 | orchestrator | (): 'NoneType' object is not subscriptable
2026-02-14 05:32:09.390212 | orchestrator |
2026-02-14 05:32:09.390224 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-14 05:32:09.390235 | orchestrator |
2026-02-14 05:32:09.390246 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-14 05:32:09.390258 | orchestrator | Saturday 14 February 2026 05:31:59 +0000 (0:00:01.256) 0:00:01.256 *****
2026-02-14 05:32:09.390269 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:32:09.390280 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:32:09.390291 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:32:09.390301 | orchestrator | ok: [testbed-node-3]
2026-02-14 05:32:09.390312 | orchestrator | ok: [testbed-node-4]
2026-02-14 05:32:09.390322 | orchestrator | ok: [testbed-node-5]
2026-02-14 05:32:09.390333 | orchestrator |
2026-02-14 05:32:09.390343 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-14 05:32:09.390354 | orchestrator | Saturday 14 February 2026 05:32:01 +0000 (0:00:01.904) 0:00:03.160 *****
2026-02-14 05:32:09.390365 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-02-14 05:32:09.390376 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-02-14 05:32:09.390386 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-02-14 05:32:09.390397 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-02-14 05:32:09.390408 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-02-14 05:32:09.390418 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-02-14 05:32:09.390429 | orchestrator |
2026-02-14 05:32:09.390439 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-02-14 05:32:09.390450 | orchestrator |
2026-02-14 05:32:09.390490 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-02-14 05:32:09.390511 | orchestrator | Saturday 14 February 2026 05:32:03 +0000 (0:00:01.301) 0:00:04.462 *****
2026-02-14 05:32:09.390530 | orchestrator | included: /ansible/roles/ovn-controller/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-14 05:32:09.390550 | orchestrator |
2026-02-14 05:32:09.390569 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-02-14 05:32:09.390588 | orchestrator | Saturday 14 February 2026 05:32:05 +0000 (0:00:02.065) 0:00:06.527 *****
2026-02-14 05:32:09.390610 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:32:09.390632 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:32:09.390660 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:32:09.390683 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:32:09.390716 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:32:09.390729 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:32:09.390740 | orchestrator |
2026-02-14 05:32:09.390751 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2026-02-14 05:32:09.390762 | orchestrator | Saturday 14 February 2026 05:32:06 +0000 (0:00:01.423) 0:00:07.950 *****
2026-02-14 05:32:09.390773 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:32:09.390785 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:32:09.390796 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:32:09.390807 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:32:09.390818 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:32:09.390840 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:32:09.390852 | orchestrator |
2026-02-14 05:32:09.390863 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2026-02-14 05:32:09.390874 | orchestrator | Saturday 14 February 2026 05:32:08 +0000 (0:00:01.246) 0:00:09.454 *****
2026-02-14 05:32:09.390885 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:32:09.390905 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:32:13.743868 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:32:13.743970 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:32:13.743984 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:32:13.743995 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:32:13.744005 | orchestrator |
2026-02-14 05:32:13.744016 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2026-02-14 05:32:13.744027 | orchestrator | Saturday 14 February 2026 05:32:09 +0000 (0:00:01.246) 0:00:10.701 *****
2026-02-14 05:32:13.744037 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:32:13.744086 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:32:13.744098 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:32:13.744107 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:32:13.744135 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:32:13.744145 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:32:13.744155 | orchestrator |
2026-02-14 05:32:13.744165 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************
2026-02-14 05:32:13.744174 | orchestrator | Saturday 14 February 2026 05:32:11 +0000 (0:00:02.007) 0:00:12.709 *****
2026-02-14 05:32:13.744185 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:32:13.744197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:32:13.744207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:32:13.744224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:32:13.744239 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:32:13.744249 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:32:13.744259 | orchestrator |
2026-02-14 05:32:13.744269 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] ***
2026-02-14 05:32:13.744279 | orchestrator | Saturday 14 February 2026 05:32:12 +0000 (0:00:01.391) 0:00:14.100 *****
2026-02-14 05:32:13.744289 | orchestrator | changed: [testbed-node-0] => {
2026-02-14 05:32:13.744300 | orchestrator |  "msg": "Notifying handlers"
2026-02-14 05:32:13.744310 | orchestrator | }
2026-02-14 05:32:13.744320 | orchestrator | changed: [testbed-node-1] => {
2026-02-14 05:32:13.744330 | orchestrator |  "msg": "Notifying handlers"
2026-02-14 05:32:13.744339 | orchestrator | }
2026-02-14 05:32:13.744348 | orchestrator | changed: [testbed-node-2] => {
2026-02-14 05:32:13.744358 | orchestrator |  "msg": "Notifying handlers"
2026-02-14 05:32:13.744368 | orchestrator | }
2026-02-14 05:32:13.744379 | orchestrator | changed: [testbed-node-3] => {
2026-02-14 05:32:13.744390 | orchestrator |  "msg": "Notifying handlers"
2026-02-14 05:32:13.744401 | orchestrator | }
2026-02-14 05:32:13.744413 | orchestrator | changed: [testbed-node-4] => {
2026-02-14 05:32:13.744423 | orchestrator |  "msg": "Notifying handlers"
2026-02-14 05:32:13.744435 | orchestrator | }
2026-02-14 05:32:13.744452 | orchestrator | changed: [testbed-node-5] => {
2026-02-14 05:32:38.308917 | orchestrator |  "msg": "Notifying handlers"
2026-02-14 05:32:38.309018 | orchestrator | }
2026-02-14 05:32:38.309029 | orchestrator |
2026-02-14 05:32:38.309036 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-14 05:32:38.309045 | orchestrator | Saturday 14 February 2026 05:32:13 +0000 (0:00:00.949) 0:00:15.050 *****
2026-02-14 05:32:38.309054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:32:38.309064 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:32:38.309071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-14 05:32:38.309099 | orchestrator
| skipping: [testbed-node-1] 2026-02-14 05:32:38.309106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:32:38.309112 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:32:38.309119 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:32:38.309124 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:32:38.309130 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:32:38.309137 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:32:38.309159 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:32:38.309164 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:32:38.309168 | orchestrator | 2026-02-14 05:32:38.309172 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-02-14 05:32:38.309176 | orchestrator | Saturday 14 February 2026 05:32:15 +0000 (0:00:01.662) 0:00:16.713 ***** 2026-02-14 05:32:38.309180 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:32:38.309185 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:32:38.309189 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:32:38.309192 | orchestrator | ok: [testbed-node-3] 2026-02-14 05:32:38.309196 | orchestrator | ok: [testbed-node-4] 2026-02-14 05:32:38.309200 | orchestrator | ok: [testbed-node-5] 2026-02-14 05:32:38.309203 | orchestrator | 2026-02-14 05:32:38.309207 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-02-14 05:32:38.309212 | orchestrator | Saturday 14 February 2026 05:32:17 +0000 (0:00:02.467) 0:00:19.180 ***** 2026-02-14 05:32:38.309216 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-14 05:32:38.309220 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-02-14 05:32:38.309228 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-02-14 05:32:38.309244 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-02-14 05:32:38.309252 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-02-14 05:32:38.309256 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-02-14 05:32:38.309260 | orchestrator | ok: 
[testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-02-14 05:32:38.309263 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-02-14 05:32:38.309267 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-14 05:32:38.309271 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-14 05:32:38.309275 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-14 05:32:38.309278 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-14 05:32:38.309282 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-14 05:32:38.309286 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-14 05:32:38.309290 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-14 05:32:38.309295 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-14 05:32:38.309298 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-14 05:32:38.309302 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-14 05:32:38.309306 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-14 05:32:38.309310 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 
'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-14 05:32:38.309313 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-14 05:32:38.309317 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-14 05:32:38.309321 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-14 05:32:38.309325 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-14 05:32:38.309329 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-14 05:32:38.309332 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-14 05:32:38.309336 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-14 05:32:38.309340 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-14 05:32:38.309343 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-14 05:32:38.309350 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-14 05:32:38.309353 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-14 05:32:38.309357 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-14 05:32:38.309361 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-14 05:32:38.309365 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-14 05:32:38.309368 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 
2026-02-14 05:32:38.309375 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-14 05:32:38.309379 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-14 05:32:38.309382 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-14 05:32:38.309386 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-14 05:32:38.309390 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-14 05:32:38.309394 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-14 05:32:38.309398 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-14 05:32:38.309404 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-14 05:35:00.803788 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-14 05:35:00.803877 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-02-14 05:35:00.803889 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-02-14 05:35:00.803896 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-02-14 05:35:00.803903 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-02-14 05:35:00.803909 | 
orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-02-14 05:35:00.803915 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-02-14 05:35:00.803922 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-14 05:35:00.803929 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-14 05:35:00.803935 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-14 05:35:00.803943 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-14 05:35:00.803949 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-14 05:35:00.803955 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-14 05:35:00.803962 | orchestrator | 2026-02-14 05:35:00.803969 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-14 05:35:00.803975 | orchestrator | Saturday 14 February 2026 05:32:37 +0000 (0:00:19.876) 0:00:39.056 ***** 2026-02-14 05:35:00.803981 | orchestrator | 2026-02-14 05:35:00.803988 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-14 05:35:00.803994 | orchestrator | Saturday 14 February 2026 05:32:37 +0000 (0:00:00.096) 0:00:39.153 ***** 2026-02-14 05:35:00.804000 | orchestrator | 2026-02-14 05:35:00.804006 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 
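The "Configure OVN in OVSDB" task above writes a set of per-chassis `external-ids` into the local Open vSwitch database: the Geneve encapsulation IP and type, the southbound `ovn-remote` connection string, the probe intervals, and, on gateway chassis only, the bridge-mapping and CMS options. As a minimal sketch of the resulting key/value layout (helper name and structure are hypothetical, not kolla-ansible code; the hosts and the port 16641 seen in `ovn-remote` mirror this specific log), the per-node settings can be reconstructed like this:

```python
# Sketch only: reconstructs the external-ids applied by the task above.
# NB_HOSTS and the relay port are taken from the log; ovn_external_ids()
# itself is a hypothetical helper, not part of kolla-ansible.

NB_HOSTS = ["192.168.16.10", "192.168.16.11", "192.168.16.12"]
SB_ENDPOINT_PORT = 16641  # southbound endpoint port seen in ovn-remote above


def ovn_external_ids(encap_ip: str, is_gateway: bool) -> dict:
    """Per-chassis external-ids, one dict per node, as listed in the task items."""
    ids = {
        "ovn-encap-ip": encap_ip,
        "ovn-encap-type": "geneve",
        "ovn-remote": ",".join(f"tcp:{h}:{SB_ENDPOINT_PORT}" for h in NB_HOSTS),
        "ovn-remote-probe-interval": "60000",
        "ovn-openflow-probe-interval": "60",
        "ovn-monitor-all": False,
    }
    if is_gateway:
        # Only gateway chassis keep these present; on the others the task
        # ensures state 'absent', matching the per-node items in the log.
        ids["ovn-bridge-mappings"] = "physnet1:br-ex"
        ids["ovn-cms-options"] = "enable-chassis-as-gw,availability-zones=nova"
    return ids


ids = ovn_external_ids("192.168.16.10", is_gateway=True)
print(ids["ovn-remote"])
# tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641
```

In a real deployment these keys end up in the `external_ids` column of the `Open_vSwitch` table (e.g. via `ovs-vsctl set open_vswitch . external_ids:ovn-encap-type=geneve`), which is what ovn-controller reads at startup.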
2026-02-14 05:35:00.804012 | orchestrator | Saturday 14 February 2026 05:32:37 +0000 (0:00:00.079) 0:00:39.233 *****
2026-02-14 05:35:00.804018 | orchestrator |
2026-02-14 05:35:00.804024 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-14 05:35:00.804051 | orchestrator | Saturday 14 February 2026 05:32:37 +0000 (0:00:00.080) 0:00:39.314 *****
2026-02-14 05:35:00.804057 | orchestrator |
2026-02-14 05:35:00.804064 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-14 05:35:00.804070 | orchestrator | Saturday 14 February 2026 05:32:38 +0000 (0:00:00.110) 0:00:39.425 *****
2026-02-14 05:35:00.804076 | orchestrator |
2026-02-14 05:35:00.804082 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-14 05:35:00.804088 | orchestrator | Saturday 14 February 2026 05:32:38 +0000 (0:00:00.085) 0:00:39.510 *****
2026-02-14 05:35:00.804094 | orchestrator |
2026-02-14 05:35:00.804112 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2026-02-14 05:35:00.804119 | orchestrator | Saturday 14 February 2026 05:32:38 +0000 (0:00:00.078) 0:00:39.588 *****
2026-02-14 05:35:00.804125 | orchestrator | changed: [testbed-node-3]
2026-02-14 05:35:00.804132 | orchestrator | changed: [testbed-node-5]
2026-02-14 05:35:00.804138 | orchestrator | changed: [testbed-node-4]
2026-02-14 05:35:00.804144 | orchestrator | changed: [testbed-node-0]
2026-02-14 05:35:00.804150 | orchestrator | changed: [testbed-node-1]
2026-02-14 05:35:00.804156 | orchestrator | changed: [testbed-node-2]
2026-02-14 05:35:00.804162 | orchestrator |
2026-02-14 05:35:00.804169 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2026-02-14 05:35:00.804175 | orchestrator |
2026-02-14 05:35:00.804181 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-02-14 05:35:00.804187 | orchestrator | Saturday 14 February 2026 05:34:49 +0000 (0:02:10.866) 0:02:50.455 *****
2026-02-14 05:35:00.804193 | orchestrator | included: /ansible/roles/ovn-db/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 05:35:00.804199 | orchestrator |
2026-02-14 05:35:00.804205 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-02-14 05:35:00.804212 | orchestrator | Saturday 14 February 2026 05:34:50 +0000 (0:00:01.160) 0:02:51.615 *****
2026-02-14 05:35:00.804218 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-14 05:35:00.804224 | orchestrator |
2026-02-14 05:35:00.804230 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2026-02-14 05:35:00.804236 | orchestrator | Saturday 14 February 2026 05:34:51 +0000 (0:00:01.160) 0:02:52.776 *****
2026-02-14 05:35:00.804242 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:35:00.804249 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:35:00.804255 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:35:00.804261 | orchestrator |
2026-02-14 05:35:00.804267 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2026-02-14 05:35:00.804285 | orchestrator | Saturday 14 February 2026 05:34:52 +0000 (0:00:00.822) 0:02:53.598 *****
2026-02-14 05:35:00.804291 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:35:00.804298 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:35:00.804304 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:35:00.804310 | orchestrator |
2026-02-14 05:35:00.804316 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2026-02-14 05:35:00.804322 | orchestrator | Saturday 14 February 2026 05:34:52 +0000 (0:00:00.345) 0:02:53.949 *****
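The `lookup_cluster.yml` tasks above repeatedly "divide hosts" by some boolean fact (volume availability, service port liveness, leader/follower role) so that later upgrade steps can target each subset separately. A toy illustration of that grouping step (hypothetical helper, not the actual ovn-db role code, which does this with Ansible group facts):

```python
# Toy grouping helper (hypothetical; the real role groups hosts via
# Ansible facts). Partitions DB hosts by a boolean fact, as in the
# "Divide hosts by their OVN NB volume availability" task above.

def divide_hosts(facts: dict) -> dict:
    """Map {host: bool} to two named groups, hosts in sorted order."""
    groups = {"with_volume": [], "without_volume": []}
    for host in sorted(facts):
        key = "with_volume" if facts[host] else "without_volume"
        groups[key].append(host)
    return groups


# During an upgrade all three DB hosts already carry their volumes:
facts = {"testbed-node-0": True, "testbed-node-1": True, "testbed-node-2": True}
print(divide_hosts(facts))
# {'with_volume': ['testbed-node-0', 'testbed-node-1', 'testbed-node-2'], 'without_volume': []}
```

The same partitioning pattern backs the later "Divide hosts by their OVN NB/SB leader/follower role" tasks, which is why the run can fail fast on an existing cluster with no leader.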
2026-02-14 05:35:00.804328 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:35:00.804335 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:35:00.804341 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:35:00.804348 | orchestrator | 2026-02-14 05:35:00.804356 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-02-14 05:35:00.804362 | orchestrator | Saturday 14 February 2026 05:34:52 +0000 (0:00:00.345) 0:02:54.294 ***** 2026-02-14 05:35:00.804369 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:35:00.804376 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:35:00.804383 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:35:00.804390 | orchestrator | 2026-02-14 05:35:00.804398 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-02-14 05:35:00.804410 | orchestrator | Saturday 14 February 2026 05:34:53 +0000 (0:00:00.598) 0:02:54.893 ***** 2026-02-14 05:35:00.804417 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:35:00.804424 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:35:00.804431 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:35:00.804438 | orchestrator | 2026-02-14 05:35:00.804446 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-02-14 05:35:00.804453 | orchestrator | Saturday 14 February 2026 05:34:53 +0000 (0:00:00.371) 0:02:55.265 ***** 2026-02-14 05:35:00.804460 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:35:00.804467 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:35:00.804474 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:35:00.804481 | orchestrator | 2026-02-14 05:35:00.804489 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-02-14 05:35:00.804496 | orchestrator | Saturday 14 February 2026 05:34:54 +0000 (0:00:00.358) 0:02:55.623 ***** 2026-02-14 05:35:00.804503 | orchestrator | ok: 
[testbed-node-0] 2026-02-14 05:35:00.804510 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:35:00.804518 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:35:00.804547 | orchestrator | 2026-02-14 05:35:00.804554 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-02-14 05:35:00.804561 | orchestrator | Saturday 14 February 2026 05:34:55 +0000 (0:00:00.759) 0:02:56.383 ***** 2026-02-14 05:35:00.804568 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:35:00.804575 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:35:00.804582 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:35:00.804589 | orchestrator | 2026-02-14 05:35:00.804597 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-02-14 05:35:00.804604 | orchestrator | Saturday 14 February 2026 05:34:55 +0000 (0:00:00.587) 0:02:56.971 ***** 2026-02-14 05:35:00.804610 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:35:00.804616 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:35:00.804622 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:35:00.804628 | orchestrator | 2026-02-14 05:35:00.804634 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-02-14 05:35:00.804640 | orchestrator | Saturday 14 February 2026 05:34:56 +0000 (0:00:00.817) 0:02:57.788 ***** 2026-02-14 05:35:00.804646 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:35:00.804652 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:35:00.804658 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:35:00.804664 | orchestrator | 2026-02-14 05:35:00.804671 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-02-14 05:35:00.804677 | orchestrator | Saturday 14 February 2026 05:34:56 +0000 (0:00:00.360) 0:02:58.149 ***** 2026-02-14 05:35:00.804683 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:35:00.804689 | orchestrator | 
skipping: [testbed-node-1] 2026-02-14 05:35:00.804695 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:35:00.804701 | orchestrator | 2026-02-14 05:35:00.804707 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-02-14 05:35:00.804713 | orchestrator | Saturday 14 February 2026 05:34:57 +0000 (0:00:00.353) 0:02:58.502 ***** 2026-02-14 05:35:00.804719 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:35:00.804725 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:35:00.804734 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:35:00.804741 | orchestrator | 2026-02-14 05:35:00.804747 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-02-14 05:35:00.804753 | orchestrator | Saturday 14 February 2026 05:34:57 +0000 (0:00:00.554) 0:02:59.057 ***** 2026-02-14 05:35:00.804759 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:35:00.804766 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:35:00.804772 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:35:00.804778 | orchestrator | 2026-02-14 05:35:00.804784 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-02-14 05:35:00.804790 | orchestrator | Saturday 14 February 2026 05:34:58 +0000 (0:00:00.820) 0:02:59.878 ***** 2026-02-14 05:35:00.804796 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:35:00.804807 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:35:00.804813 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:35:00.804819 | orchestrator | 2026-02-14 05:35:00.804825 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-02-14 05:35:00.804831 | orchestrator | Saturday 14 February 2026 05:34:58 +0000 (0:00:00.388) 0:03:00.266 ***** 2026-02-14 05:35:00.804837 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:35:00.804843 | orchestrator | ok: [testbed-node-1] 2026-02-14 
05:35:00.804849 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:35:00.804855 | orchestrator | 2026-02-14 05:35:00.804862 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-02-14 05:35:00.804868 | orchestrator | Saturday 14 February 2026 05:35:00 +0000 (0:00:01.086) 0:03:01.353 ***** 2026-02-14 05:35:00.804874 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:35:00.804880 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:35:00.804886 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:35:00.804892 | orchestrator | 2026-02-14 05:35:00.804898 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-02-14 05:35:00.804904 | orchestrator | Saturday 14 February 2026 05:35:00 +0000 (0:00:00.395) 0:03:01.748 ***** 2026-02-14 05:35:00.804910 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:35:00.804917 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:35:00.804923 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:35:00.804929 | orchestrator | 2026-02-14 05:35:00.804939 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-14 05:35:09.810404 | orchestrator | Saturday 14 February 2026 05:35:00 +0000 (0:00:00.366) 0:03:02.115 ***** 2026-02-14 05:35:09.810518 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:35:09.810592 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:35:09.810612 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:35:09.810628 | orchestrator | 2026-02-14 05:35:09.810640 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-02-14 05:35:09.810652 | orchestrator | Saturday 14 February 2026 05:35:01 +0000 (0:00:00.721) 0:03:02.836 ***** 2026-02-14 05:35:09.810666 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 
'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:35:09.810681 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:35:09.810693 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:35:09.810725 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 
'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:35:09.810778 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:35:09.810791 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:35:09.810824 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:35:09.810837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': 
{'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:35:09.810849 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:35:09.810860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:35:09.810872 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:35:09.810892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:35:09.810906 | orchestrator | 2026-02-14 05:35:09.810920 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-02-14 05:35:09.810939 | orchestrator | Saturday 14 February 2026 05:35:04 +0000 (0:00:03.038) 0:03:05.875 ***** 2026-02-14 05:35:09.810953 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:35:09.810966 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:35:09.810990 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:35:20.030960 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:35:20.031075 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:35:20.031091 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 
'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:35:20.031125 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:35:20.031152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:35:20.031164 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:35:20.031176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:35:20.031207 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:35:20.031220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:35:20.031232 | orchestrator | 2026-02-14 05:35:20.031245 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-02-14 05:35:20.031258 | orchestrator | Saturday 14 February 2026 05:35:09 +0000 (0:00:05.249) 0:03:11.125 ***** 2026-02-14 05:35:20.031270 | orchestrator | included: 
/ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-02-14 05:35:20.031281 | orchestrator | 2026-02-14 05:35:20.031292 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-02-14 05:35:20.031303 | orchestrator | Saturday 14 February 2026 05:35:10 +0000 (0:00:00.995) 0:03:12.120 ***** 2026-02-14 05:35:20.031326 | orchestrator | changed: [testbed-node-0] 2026-02-14 05:35:20.031337 | orchestrator | changed: [testbed-node-1] 2026-02-14 05:35:20.031348 | orchestrator | changed: [testbed-node-2] 2026-02-14 05:35:20.031359 | orchestrator | 2026-02-14 05:35:20.031369 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-02-14 05:35:20.031380 | orchestrator | Saturday 14 February 2026 05:35:11 +0000 (0:00:00.967) 0:03:13.087 ***** 2026-02-14 05:35:20.031391 | orchestrator | changed: [testbed-node-0] 2026-02-14 05:35:20.031402 | orchestrator | changed: [testbed-node-1] 2026-02-14 05:35:20.031412 | orchestrator | changed: [testbed-node-2] 2026-02-14 05:35:20.031423 | orchestrator | 2026-02-14 05:35:20.031434 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-02-14 05:35:20.031444 | orchestrator | Saturday 14 February 2026 05:35:13 +0000 (0:00:01.622) 0:03:14.709 ***** 2026-02-14 05:35:20.031456 | orchestrator | changed: [testbed-node-0] 2026-02-14 05:35:20.031467 | orchestrator | changed: [testbed-node-1] 2026-02-14 05:35:20.031478 | orchestrator | changed: [testbed-node-2] 2026-02-14 05:35:20.031491 | orchestrator | 2026-02-14 05:35:20.031503 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-02-14 05:35:20.031516 | orchestrator | Saturday 14 February 2026 05:35:15 +0000 (0:00:01.931) 0:03:16.641 ***** 2026-02-14 05:35:20.031566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:35:20.031583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:35:20.031596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:35:20.031610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 
'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:35:20.031631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:35:22.880741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:35:22.880845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 
05:35:22.880857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:35:22.880881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:35:22.880889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:35:22.880897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:35:22.880904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:35:22.880913 | orchestrator | 2026-02-14 05:35:22.880922 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-02-14 05:35:22.880957 | orchestrator | Saturday 14 February 2026 05:35:20 +0000 (0:00:04.696) 0:03:21.337 ***** 2026-02-14 05:35:22.880966 | orchestrator | changed: [testbed-node-0] => { 2026-02-14 05:35:22.880974 | orchestrator |  "msg": "Notifying handlers" 2026-02-14 05:35:22.880982 | orchestrator | } 2026-02-14 05:35:22.880989 | orchestrator | changed: [testbed-node-1] => { 2026-02-14 05:35:22.880996 | orchestrator |  "msg": "Notifying handlers" 2026-02-14 05:35:22.881003 | orchestrator | } 2026-02-14 05:35:22.881010 | orchestrator | changed: [testbed-node-2] => { 2026-02-14 05:35:22.881017 | orchestrator |  "msg": "Notifying handlers" 2026-02-14 05:35:22.881024 | orchestrator | } 2026-02-14 05:35:22.881031 | orchestrator | 2026-02-14 05:35:22.881052 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-14 05:35:22.881060 | orchestrator | Saturday 14 February 2026 05:35:20 +0000 (0:00:00.416) 0:03:21.753 ***** 2026-02-14 05:35:22.881068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 
'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:35:22.881077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:35:22.881084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:35:22.881098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:35:22.881106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:35:22.881113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:35:22.881126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 
05:35:22.881139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:36:37.252434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-14 05:36:37.252631 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-14 05:36:37.252657 | orchestrator | 2026-02-14 05:36:37.252671 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-02-14 05:36:37.252684 | orchestrator | Saturday 14 February 2026 05:35:22 +0000 
(0:00:02.435) 0:03:24.189 ***** 2026-02-14 05:36:37.252696 | orchestrator | changed: [testbed-node-0] => (item=[1]) 2026-02-14 05:36:37.252708 | orchestrator | changed: [testbed-node-1] => (item=[1]) 2026-02-14 05:36:37.252736 | orchestrator | changed: [testbed-node-2] => (item=[1]) 2026-02-14 05:36:37.252747 | orchestrator | 2026-02-14 05:36:37.252759 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-02-14 05:36:37.252771 | orchestrator | Saturday 14 February 2026 05:35:24 +0000 (0:00:01.211) 0:03:25.400 ***** 2026-02-14 05:36:37.252782 | orchestrator | changed: [testbed-node-0] => { 2026-02-14 05:36:37.252794 | orchestrator |  "msg": "Notifying handlers" 2026-02-14 05:36:37.252805 | orchestrator | } 2026-02-14 05:36:37.252816 | orchestrator | changed: [testbed-node-1] => { 2026-02-14 05:36:37.252827 | orchestrator |  "msg": "Notifying handlers" 2026-02-14 05:36:37.252837 | orchestrator | } 2026-02-14 05:36:37.252848 | orchestrator | changed: [testbed-node-2] => { 2026-02-14 05:36:37.252859 | orchestrator |  "msg": "Notifying handlers" 2026-02-14 05:36:37.252869 | orchestrator | } 2026-02-14 05:36:37.252880 | orchestrator | 2026-02-14 05:36:37.252895 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-14 05:36:37.252946 | orchestrator | Saturday 14 February 2026 05:35:24 +0000 (0:00:00.580) 0:03:25.981 ***** 2026-02-14 05:36:37.252968 | orchestrator | 2026-02-14 05:36:37.252985 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-14 05:36:37.253003 | orchestrator | Saturday 14 February 2026 05:35:24 +0000 (0:00:00.075) 0:03:26.056 ***** 2026-02-14 05:36:37.253022 | orchestrator | 2026-02-14 05:36:37.253041 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-14 05:36:37.253060 | orchestrator | Saturday 14 February 2026 05:35:24 +0000 (0:00:00.074) 
0:03:26.130 ***** 2026-02-14 05:36:37.253073 | orchestrator | 2026-02-14 05:36:37.253086 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-02-14 05:36:37.253098 | orchestrator | Saturday 14 February 2026 05:35:24 +0000 (0:00:00.074) 0:03:26.204 ***** 2026-02-14 05:36:37.253110 | orchestrator | changed: [testbed-node-0] 2026-02-14 05:36:37.253122 | orchestrator | changed: [testbed-node-2] 2026-02-14 05:36:37.253134 | orchestrator | changed: [testbed-node-1] 2026-02-14 05:36:37.253146 | orchestrator | 2026-02-14 05:36:37.253158 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-14 05:36:37.253170 | orchestrator | Saturday 14 February 2026 05:35:40 +0000 (0:00:15.755) 0:03:41.960 ***** 2026-02-14 05:36:37.253182 | orchestrator | changed: [testbed-node-1] 2026-02-14 05:36:37.253195 | orchestrator | changed: [testbed-node-2] 2026-02-14 05:36:37.253207 | orchestrator | changed: [testbed-node-0] 2026-02-14 05:36:37.253219 | orchestrator | 2026-02-14 05:36:37.253231 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] ******************* 2026-02-14 05:36:37.253244 | orchestrator | Saturday 14 February 2026 05:35:56 +0000 (0:00:16.061) 0:03:58.021 ***** 2026-02-14 05:36:37.253256 | orchestrator | changed: [testbed-node-0] => (item=1) 2026-02-14 05:36:37.253268 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-02-14 05:36:37.253281 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-02-14 05:36:37.253292 | orchestrator | 2026-02-14 05:36:37.253303 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-02-14 05:36:37.253314 | orchestrator | Saturday 14 February 2026 05:36:07 +0000 (0:00:10.712) 0:04:08.734 ***** 2026-02-14 05:36:37.253325 | orchestrator | changed: [testbed-node-1] 2026-02-14 05:36:37.253335 | orchestrator | changed: [testbed-node-2] 2026-02-14 05:36:37.253346 | 
orchestrator | changed: [testbed-node-0] 2026-02-14 05:36:37.253357 | orchestrator | 2026-02-14 05:36:37.253443 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-02-14 05:36:37.253455 | orchestrator | Saturday 14 February 2026 05:36:24 +0000 (0:00:16.645) 0:04:25.380 ***** 2026-02-14 05:36:37.253466 | orchestrator | Pausing for 5 seconds 2026-02-14 05:36:37.253477 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:36:37.253488 | orchestrator | 2026-02-14 05:36:37.253498 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-02-14 05:36:37.253509 | orchestrator | Saturday 14 February 2026 05:36:29 +0000 (0:00:05.198) 0:04:30.578 ***** 2026-02-14 05:36:37.253520 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:36:37.253531 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:36:37.253541 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:36:37.253552 | orchestrator | 2026-02-14 05:36:37.253589 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-14 05:36:37.253622 | orchestrator | Saturday 14 February 2026 05:36:30 +0000 (0:00:00.873) 0:04:31.452 ***** 2026-02-14 05:36:37.253634 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:36:37.253644 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:36:37.253673 | orchestrator | changed: [testbed-node-0] 2026-02-14 05:36:37.253684 | orchestrator | 2026-02-14 05:36:37.253694 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-02-14 05:36:37.253705 | orchestrator | Saturday 14 February 2026 05:36:30 +0000 (0:00:00.720) 0:04:32.173 ***** 2026-02-14 05:36:37.253716 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:36:37.253726 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:36:37.253748 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:36:37.253759 | orchestrator | 2026-02-14 05:36:37.253770 | 
orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-02-14 05:36:37.253780 | orchestrator | Saturday 14 February 2026 05:36:31 +0000 (0:00:00.849) 0:04:33.022 ***** 2026-02-14 05:36:37.253791 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:36:37.253802 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:36:37.253812 | orchestrator | changed: [testbed-node-1] 2026-02-14 05:36:37.253823 | orchestrator | 2026-02-14 05:36:37.253833 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-02-14 05:36:37.253844 | orchestrator | Saturday 14 February 2026 05:36:32 +0000 (0:00:01.038) 0:04:34.060 ***** 2026-02-14 05:36:37.253854 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:36:37.253865 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:36:37.253876 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:36:37.253886 | orchestrator | 2026-02-14 05:36:37.253897 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-02-14 05:36:37.253907 | orchestrator | Saturday 14 February 2026 05:36:33 +0000 (0:00:00.832) 0:04:34.893 ***** 2026-02-14 05:36:37.253918 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:36:37.253928 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:36:37.253939 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:36:37.253949 | orchestrator | 2026-02-14 05:36:37.253960 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] *************************************** 2026-02-14 05:36:37.253971 | orchestrator | Saturday 14 February 2026 05:36:34 +0000 (0:00:00.828) 0:04:35.721 ***** 2026-02-14 05:36:37.253989 | orchestrator | ok: [testbed-node-0] => (item=1) 2026-02-14 05:36:37.254000 | orchestrator | ok: [testbed-node-1] => (item=1) 2026-02-14 05:36:37.254011 | orchestrator | ok: [testbed-node-2] => (item=1) 2026-02-14 05:36:37.254088 | orchestrator | 2026-02-14 05:36:37.254108 | orchestrator | PLAY 
RECAP *********************************************************************
2026-02-14 05:36:37.254130 | orchestrator | testbed-node-0 : ok=49  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-14 05:36:37.254150 | orchestrator | testbed-node-1 : ok=48  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-14 05:36:37.254170 | orchestrator | testbed-node-2 : ok=47  changed=15  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-02-14 05:36:37.254190 | orchestrator | testbed-node-3 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-14 05:36:37.254210 | orchestrator | testbed-node-4 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-14 05:36:37.254228 | orchestrator | testbed-node-5 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-14 05:36:37.254247 | orchestrator |
2026-02-14 05:36:37.254265 | orchestrator |
2026-02-14 05:36:37.254286 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 05:36:37.254306 | orchestrator | Saturday 14 February 2026 05:36:37 +0000 (0:00:02.827) 0:04:38.549 *****
2026-02-14 05:36:37.254325 | orchestrator | ===============================================================================
2026-02-14 05:36:37.254344 | orchestrator | ovn-controller : Restart ovn-controller container --------------------- 130.87s
2026-02-14 05:36:37.254364 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.88s
2026-02-14 05:36:37.254384 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 16.65s
2026-02-14 05:36:37.254405 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 16.06s
2026-02-14 05:36:37.254424 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 15.76s
2026-02-14 05:36:37.254444 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 10.71s
2026-02-14 05:36:37.254481 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.25s
2026-02-14 05:36:37.254495 | orchestrator | ovn-db : Wait for leader election --------------------------------------- 5.20s
2026-02-14 05:36:37.254506 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 4.70s
2026-02-14 05:36:37.254516 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 3.04s
2026-02-14 05:36:37.254527 | orchestrator | ovn-db : Wait for ovn-sb-db-relay --------------------------------------- 2.83s
2026-02-14 05:36:37.254537 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.47s
2026-02-14 05:36:37.254548 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.44s
2026-02-14 05:36:37.254608 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 2.07s
2026-02-14 05:36:37.254621 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.01s
2026-02-14 05:36:37.254632 | orchestrator | ovn-db : Generate config files for OVN relay services ------------------- 1.93s
2026-02-14 05:36:37.254655 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.90s
2026-02-14 05:36:37.809062 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.66s
2026-02-14 05:36:37.809160 | orchestrator | ovn-db : Copying over config.json files for OVN relay services ---------- 1.62s
2026-02-14 05:36:37.809174 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.50s
2026-02-14 05:36:38.356428 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-02-14 05:36:38.356536 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-02-14 05:36:38.356553 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh
2026-02-14 05:36:38.363333 | orchestrator | + set -e
2026-02-14 05:36:38.363408 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-14 05:36:38.363423 | orchestrator | ++ export INTERACTIVE=false
2026-02-14 05:36:38.363435 | orchestrator | ++ INTERACTIVE=false
2026-02-14 05:36:38.363446 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-14 05:36:38.363456 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-14 05:36:38.363467 | orchestrator | + osism apply ceph-rolling_update -e ireallymeanit=yes
2026-02-14 05:36:40.619807 | orchestrator | 2026-02-14 05:36:40 | INFO  | Task a6d21dc4-08f7-4558-ac0d-73bbf322a89b (ceph-rolling_update) was prepared for execution.
2026-02-14 05:36:40.619888 | orchestrator | 2026-02-14 05:36:40 | INFO  | It takes a moment until task a6d21dc4-08f7-4558-ac0d-73bbf322a89b (ceph-rolling_update) has been started and output is visible here.
2026-02-14 05:38:07.132664 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-14 05:38:07.132782 | orchestrator | 2.16.14
2026-02-14 05:38:07.132798 | orchestrator |
2026-02-14 05:38:07.132810 | orchestrator | PLAY [Confirm whether user really meant to upgrade the cluster] ****************
2026-02-14 05:38:07.132821 | orchestrator |
2026-02-14 05:38:07.132831 | orchestrator | TASK [Exit playbook, if user did not mean to upgrade cluster] ******************
2026-02-14 05:38:07.132841 | orchestrator | Saturday 14 February 2026 05:36:49 +0000 (0:00:01.858) 0:00:01.858 *****
2026-02-14 05:38:07.132867 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors
2026-02-14 05:38:07.132877 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: nfss
2026-02-14 05:38:07.132888 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: clients
2026-02-14 05:38:07.132906 | orchestrator |
skipping: [localhost]
2026-02-14 05:38:07.132924 | orchestrator |
2026-02-14 05:38:07.132941 | orchestrator | PLAY [Gather facts and check the init system] **********************************
2026-02-14 05:38:07.132960 | orchestrator |
2026-02-14 05:38:07.132978 | orchestrator | TASK [Gather facts on all Ceph hosts for following reference] ******************
2026-02-14 05:38:07.132992 | orchestrator | Saturday 14 February 2026 05:36:51 +0000 (0:00:01.986) 0:00:03.845 *****
2026-02-14 05:38:07.133001 | orchestrator | ok: [testbed-node-0] => {
2026-02-14 05:38:07.133011 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-02-14 05:38:07.133048 | orchestrator | }
2026-02-14 05:38:07.133059 | orchestrator | ok: [testbed-node-1] => {
2026-02-14 05:38:07.133069 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-02-14 05:38:07.133078 | orchestrator | }
2026-02-14 05:38:07.133170 | orchestrator | ok: [testbed-node-2] => {
2026-02-14 05:38:07.133184 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-02-14 05:38:07.133195 | orchestrator | }
2026-02-14 05:38:07.133206 | orchestrator | ok: [testbed-node-3] => {
2026-02-14 05:38:07.133218 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-02-14 05:38:07.133229 | orchestrator | }
2026-02-14 05:38:07.133240 | orchestrator | ok: [testbed-node-4] => {
2026-02-14 05:38:07.133252 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-02-14 05:38:07.133263 | orchestrator | }
2026-02-14 05:38:07.133274 | orchestrator | ok: [testbed-node-5] => {
2026-02-14 05:38:07.133285 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-02-14 05:38:07.133296 | orchestrator | }
2026-02-14 05:38:07.133307 | orchestrator | ok: [testbed-manager] => {
2026-02-14 05:38:07.133319 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-02-14 05:38:07.133330 | orchestrator | }
2026-02-14 05:38:07.133342 | orchestrator |
2026-02-14 05:38:07.133353 | orchestrator | TASK [Gather facts] ************************************************************
2026-02-14 05:38:07.133364 | orchestrator | Saturday 14 February 2026 05:36:57 +0000 (0:00:06.378) 0:00:10.224 *****
2026-02-14 05:38:07.133375 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:38:07.133387 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:38:07.133398 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:38:07.133410 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:38:07.133421 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:38:07.133432 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:38:07.133443 | orchestrator | ok: [testbed-manager]
2026-02-14 05:38:07.133454 | orchestrator |
2026-02-14 05:38:07.133466 | orchestrator | TASK [Gather and delegate facts] ***********************************************
2026-02-14 05:38:07.133477 | orchestrator | Saturday 14 February 2026 05:37:03 +0000 (0:00:05.872) 0:00:16.096 *****
2026-02-14 05:38:07.133488 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-14 05:38:07.133499 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-14 05:38:07.133511 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-14 05:38:07.133522 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-14 05:38:07.133534 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-14 05:38:07.133545 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-14 05:38:07.133555 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-14 05:38:07.133564 | orchestrator |
2026-02-14 05:38:07.133574 | orchestrator | TASK [Set_fact rolling_update] *************************************************
2026-02-14 05:38:07.133626 | orchestrator | Saturday 14 February 2026 05:37:36 +0000 (0:00:32.418) 0:00:48.515 *****
2026-02-14 05:38:07.133637 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:38:07.133647 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:38:07.133657 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:38:07.133666 | orchestrator | ok: [testbed-node-3]
2026-02-14 05:38:07.133676 | orchestrator | ok: [testbed-node-4]
2026-02-14 05:38:07.133685 | orchestrator | ok: [testbed-node-5]
2026-02-14 05:38:07.133696 | orchestrator | ok: [testbed-manager]
2026-02-14 05:38:07.133705 | orchestrator |
2026-02-14 05:38:07.133715 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-14 05:38:07.133724 | orchestrator | Saturday 14 February 2026 05:37:38 +0000 (0:00:02.240) 0:00:50.755 *****
2026-02-14 05:38:07.133745 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-02-14 05:38:07.133756 | orchestrator |
2026-02-14 05:38:07.133766 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-14 05:38:07.133776 | orchestrator | Saturday 14 February 2026 05:37:41 +0000 (0:00:02.853) 0:00:53.608 *****
2026-02-14 05:38:07.133786 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:38:07.133795 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:38:07.133805 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:38:07.133814 | orchestrator | ok: [testbed-node-3]
2026-02-14 05:38:07.133823 | orchestrator | ok: [testbed-node-4]
2026-02-14 05:38:07.133833 | orchestrator | ok: [testbed-node-5]
2026-02-14 05:38:07.133842 | orchestrator | ok: [testbed-manager]
2026-02-14 05:38:07.133852 | orchestrator |
2026-02-14 05:38:07.133881 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-14 05:38:07.133893 | orchestrator | Saturday 14 February 2026 05:37:43 +0000 (0:00:02.472) 0:00:56.081 *****
2026-02-14 05:38:07.133902 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:38:07.133912 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:38:07.133921 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:38:07.133931 | orchestrator | ok: [testbed-node-3]
2026-02-14 05:38:07.133940 | orchestrator | ok: [testbed-node-4]
2026-02-14 05:38:07.133950 | orchestrator | ok: [testbed-node-5]
2026-02-14 05:38:07.133960 | orchestrator | ok: [testbed-manager]
2026-02-14 05:38:07.133969 | orchestrator |
2026-02-14 05:38:07.133987 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-14 05:38:07.133996 | orchestrator | Saturday 14 February 2026 05:37:45 +0000 (0:00:02.089) 0:00:58.170 *****
2026-02-14 05:38:07.134006 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:38:07.134073 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:38:07.134084 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:38:07.134094 | orchestrator | ok: [testbed-node-3]
2026-02-14 05:38:07.134103 | orchestrator | ok: [testbed-node-4]
2026-02-14 05:38:07.134146 | orchestrator | ok: [testbed-node-5]
2026-02-14 05:38:07.134155 | orchestrator | ok: [testbed-manager]
2026-02-14 05:38:07.134165 | orchestrator |
2026-02-14 05:38:07.134175 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-14 05:38:07.134184 | orchestrator | Saturday 14 February 2026 05:37:48 +0000 (0:00:02.524) 0:01:00.695 *****
2026-02-14 05:38:07.134194 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:38:07.134204 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:38:07.134213 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:38:07.134222 | orchestrator | ok: [testbed-node-3]
2026-02-14 05:38:07.134232 | orchestrator | ok: [testbed-node-4]
2026-02-14 05:38:07.134241 | orchestrator | ok: [testbed-node-5]
2026-02-14 05:38:07.134251 | orchestrator | ok: [testbed-manager]
2026-02-14 05:38:07.134261 | orchestrator |
2026-02-14 05:38:07.134270 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-14 05:38:07.134280 | orchestrator | Saturday 14 February 2026 05:37:50 +0000 (0:00:01.865) 0:01:02.561 *****
2026-02-14 05:38:07.134289 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:38:07.134299 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:38:07.134308 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:38:07.134318 | orchestrator | ok: [testbed-node-3]
2026-02-14 05:38:07.134327 | orchestrator | ok: [testbed-node-4]
2026-02-14 05:38:07.134337 | orchestrator | ok: [testbed-node-5]
2026-02-14 05:38:07.134346 | orchestrator | ok: [testbed-manager]
2026-02-14 05:38:07.134356 | orchestrator |
2026-02-14 05:38:07.134366 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-14 05:38:07.134375 | orchestrator | Saturday 14 February 2026 05:37:52 +0000 (0:00:02.130) 0:01:04.692 *****
2026-02-14 05:38:07.134385 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:38:07.134394 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:38:07.134404 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:38:07.134413 | orchestrator | ok: [testbed-node-3]
2026-02-14 05:38:07.134423 | orchestrator | ok: [testbed-node-4]
2026-02-14 05:38:07.134440 | orchestrator | ok: [testbed-node-5]
2026-02-14 05:38:07.134449 | orchestrator | ok: [testbed-manager]
2026-02-14 05:38:07.134459 | orchestrator |
2026-02-14 05:38:07.134469 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-14 05:38:07.134478 | orchestrator | Saturday 14 February 2026 05:37:54 +0000 (0:00:02.019) 0:01:06.711 *****
2026-02-14 05:38:07.134488 |
orchestrator | skipping: [testbed-node-0]
2026-02-14 05:38:07.134497 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:38:07.134507 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:38:07.134517 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:38:07.134526 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:38:07.134536 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:38:07.134545 | orchestrator | skipping: [testbed-manager]
2026-02-14 05:38:07.134555 | orchestrator |
2026-02-14 05:38:07.134564 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-14 05:38:07.134574 | orchestrator | Saturday 14 February 2026 05:37:56 +0000 (0:00:02.155) 0:01:08.866 *****
2026-02-14 05:38:07.134601 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:38:07.134612 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:38:07.134622 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:38:07.134631 | orchestrator | ok: [testbed-node-3]
2026-02-14 05:38:07.134641 | orchestrator | ok: [testbed-node-4]
2026-02-14 05:38:07.134650 | orchestrator | ok: [testbed-node-5]
2026-02-14 05:38:07.134660 | orchestrator | ok: [testbed-manager]
2026-02-14 05:38:07.134670 | orchestrator |
2026-02-14 05:38:07.134680 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-14 05:38:07.134689 | orchestrator | Saturday 14 February 2026 05:37:58 +0000 (0:00:02.135) 0:01:11.001 *****
2026-02-14 05:38:07.134699 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-14 05:38:07.134709 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-14 05:38:07.134718 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-14 05:38:07.134728 | orchestrator |
2026-02-14 05:38:07.134737 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-14 05:38:07.134747 | orchestrator | Saturday 14 February 2026 05:38:00 +0000 (0:00:01.688) 0:01:12.690 *****
2026-02-14 05:38:07.134756 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:38:07.134766 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:38:07.134776 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:38:07.134785 | orchestrator | ok: [testbed-node-3]
2026-02-14 05:38:07.134795 | orchestrator | ok: [testbed-node-4]
2026-02-14 05:38:07.134804 | orchestrator | ok: [testbed-manager]
2026-02-14 05:38:07.134813 | orchestrator | ok: [testbed-node-5]
2026-02-14 05:38:07.134823 | orchestrator |
2026-02-14 05:38:07.134833 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-14 05:38:07.134843 | orchestrator | Saturday 14 February 2026 05:38:02 +0000 (0:00:02.038) 0:01:14.728 *****
2026-02-14 05:38:07.134852 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-14 05:38:07.134862 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-14 05:38:07.134871 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-14 05:38:07.134881 | orchestrator |
2026-02-14 05:38:07.134890 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-14 05:38:07.134900 | orchestrator | Saturday 14 February 2026 05:38:05 +0000 (0:00:03.316) 0:01:18.045 *****
2026-02-14 05:38:07.134918 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-14 05:38:29.439452 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-14 05:38:29.439574 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-14 05:38:29.439638 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:38:29.439653 | orchestrator |
2026-02-14 05:38:29.439666 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-14 05:38:29.439700 | orchestrator | Saturday 14 February 2026 05:38:07 +0000 (0:00:01.402) 0:01:19.447 *****
2026-02-14 05:38:29.439714 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-14 05:38:29.439728 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-14 05:38:29.439837 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-14 05:38:29.439857 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:38:29.439868 | orchestrator |
2026-02-14 05:38:29.439879 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-14 05:38:29.439891 | orchestrator | Saturday 14 February 2026 05:38:09 +0000 (0:00:01.920) 0:01:21.368 *****
2026-02-14 05:38:29.439904 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-14 05:38:29.439918 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-14 05:38:29.439929 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-14 05:38:29.439941 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:38:29.439952 | orchestrator |
2026-02-14 05:38:29.439963 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-14 05:38:29.439974 | orchestrator | Saturday 14 February 2026 05:38:10 +0000 (0:00:01.168) 0:01:22.537 *****
2026-02-14 05:38:29.439989 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '775cd2ba237c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-14 05:38:03.075490', 'end': '2026-02-14 05:38:03.127851', 'delta': '0:00:00.052361', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['775cd2ba237c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-14 05:38:29.440030 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '26dcb1313f5c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-14 05:38:03.952998', 'end': '2026-02-14 05:38:04.000807', 'delta': '0:00:00.047809', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['26dcb1313f5c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-14 05:38:29.440058 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '7aff8e7c54ed', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-14 05:38:04.499680', 'end': '2026-02-14 05:38:04.540016', 'delta': '0:00:00.040336', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['7aff8e7c54ed'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-14 05:38:29.440070 | orchestrator |
2026-02-14 05:38:29.440081 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-14 05:38:29.440093 | orchestrator | Saturday 14 February 2026 05:38:11 +0000 (0:00:01.227) 0:01:23.764 *****
2026-02-14 05:38:29.440104 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:38:29.440116 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:38:29.440127 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:38:29.440137 | orchestrator | ok: [testbed-node-3]
2026-02-14 05:38:29.440148 | orchestrator | ok: [testbed-node-4]
2026-02-14 05:38:29.440158 | orchestrator | ok: [testbed-node-5]
2026-02-14 05:38:29.440169 | orchestrator | ok: [testbed-manager]
2026-02-14 05:38:29.440179 | orchestrator |
2026-02-14 05:38:29.440190 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-14 05:38:29.440201 | orchestrator | Saturday 14 February 2026 05:38:13 +0000 (0:00:02.283) 0:01:26.047 *****
2026-02-14 05:38:29.440212 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:38:29.440222 | orchestrator |
2026-02-14 05:38:29.440233 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-14 05:38:29.440243 | orchestrator | Saturday 14 February 2026 05:38:14 +0000 (0:00:01.243) 0:01:27.291 *****
2026-02-14 05:38:29.440254 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:38:29.440265 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:38:29.440275 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:38:29.440286 | orchestrator | ok: [testbed-node-3]
2026-02-14 05:38:29.440296 | orchestrator | ok: [testbed-node-4]
2026-02-14 05:38:29.440307 | orchestrator | ok: [testbed-node-5]
2026-02-14 05:38:29.440317 | orchestrator | ok: [testbed-manager]
2026-02-14 05:38:29.440328 | orchestrator |
2026-02-14 05:38:29.440339 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-14 05:38:29.440349 | orchestrator | Saturday 14 February 2026 05:38:17 +0000 (0:00:02.185) 0:01:29.477 *****
2026-02-14 05:38:29.440360 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:38:29.440370 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-02-14 05:38:29.440381 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-14 05:38:29.440391 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-14 05:38:29.440402 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-14 05:38:29.440412 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-14 05:38:29.440423 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-14 05:38:29.440434 | orchestrator |
2026-02-14 05:38:29.440444 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-14 05:38:29.440455 | orchestrator | Saturday 14 February 2026 05:38:20 +0000 (0:00:03.223) 0:01:32.700 *****
2026-02-14 05:38:29.440466 | orchestrator | ok: [testbed-node-0]
2026-02-14 05:38:29.440477 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:38:29.440493 | orchestrator | ok: [testbed-node-2]
2026-02-14 05:38:29.440504 | orchestrator | ok: [testbed-node-3]
2026-02-14 05:38:29.440515 | orchestrator | ok: [testbed-node-4]
2026-02-14 05:38:29.440526 | orchestrator | ok: [testbed-node-5]
2026-02-14 05:38:29.440536 | orchestrator | ok: [testbed-manager]
2026-02-14 05:38:29.440547 | orchestrator |
2026-02-14 05:38:29.440558 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-14 05:38:29.440568 | orchestrator | Saturday 14 February 2026 05:38:22 +0000 (0:00:02.263) 0:01:34.963 *****
2026-02-14 05:38:29.440579 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:38:29.440645 | orchestrator |
2026-02-14 05:38:29.440672 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-14 05:38:29.440694 | orchestrator | Saturday 14 February 2026 05:38:23 +0000 (0:00:01.120) 0:01:36.084 *****
2026-02-14 05:38:29.440716 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:38:29.440738 | orchestrator |
2026-02-14 05:38:29.440750 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-14 05:38:29.440760 | orchestrator | Saturday 14 February 2026 05:38:25 +0000 (0:00:01.261) 0:01:37.345 *****
2026-02-14 05:38:29.440771 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:38:29.440782 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:38:29.440792 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:38:29.440803 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:38:29.440814 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:38:29.440824 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:38:29.440835 | orchestrator | skipping: [testbed-manager]
2026-02-14 05:38:29.440845 | orchestrator |
2026-02-14 05:38:29.440856 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-14 05:38:29.440867 | orchestrator | Saturday 14 February 2026 05:38:27 +0000 (0:00:02.419) 0:01:39.764 *****
2026-02-14 05:38:29.440878 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:38:29.440888 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:38:29.440899 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:38:29.440910 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:38:29.440920 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:38:29.440931 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:38:29.440949 | orchestrator | skipping: [testbed-manager]
2026-02-14 05:38:40.281920 | orchestrator |
2026-02-14 05:38:40.282011 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-14 05:38:40.282045 | orchestrator | Saturday 14 February 2026 05:38:29 +0000 (0:00:01.991) 0:01:41.756 *****
2026-02-14 05:38:40.282051 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:38:40.282056 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:38:40.282062 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:38:40.282078 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:38:40.282083 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:38:40.282096 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:38:40.282165 | orchestrator | skipping: [testbed-manager]
2026-02-14 05:38:40.282172 | orchestrator |
2026-02-14 05:38:40.282177 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-14 05:38:40.282182 | orchestrator | Saturday 14 February 2026 05:38:31 +0000 (0:00:02.238) 0:01:43.994 *****
2026-02-14 05:38:40.282187 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:38:40.282193 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:38:40.282198 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:38:40.282203 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:38:40.282207 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:38:40.282212 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:38:40.282216 | orchestrator | skipping: [testbed-manager]
2026-02-14 05:38:40.282221 | orchestrator |
2026-02-14 05:38:40.282226 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-14 05:38:40.282230 | orchestrator | Saturday 14 February 2026 05:38:33 +0000 (0:00:02.060) 0:01:46.055 *****
2026-02-14 05:38:40.282250 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:38:40.282254 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:38:40.282259 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:38:40.282264 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:38:40.282268 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:38:40.282273 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:38:40.282277 | orchestrator | skipping: [testbed-manager]
2026-02-14 05:38:40.282282 | orchestrator |
2026-02-14 05:38:40.282286 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-14 05:38:40.282291 | orchestrator | Saturday 14 February 2026 05:38:35 +0000 (0:00:02.257) 0:01:48.313 *****
2026-02-14 05:38:40.282295 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:38:40.282300 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:38:40.282304 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:38:40.282309 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:38:40.282313 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:38:40.282318 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:38:40.282322 | orchestrator | skipping: [testbed-manager]
2026-02-14 05:38:40.282327 | orchestrator |
2026-02-14 05:38:40.282332 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-14 05:38:40.282337 | orchestrator | Saturday 14 February 2026 05:38:37 +0000 (0:00:01.971) 0:01:50.284 *****
2026-02-14 05:38:40.282341 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:38:40.282346 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:38:40.282350 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:38:40.282355 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:38:40.282359 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:38:40.282364 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:38:40.282368 | orchestrator | skipping: [testbed-manager]
2026-02-14 05:38:40.282373 | orchestrator |
2026-02-14 05:38:40.282378 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-14 05:38:40.282382 | orchestrator | Saturday 14 February 2026 05:38:40 +0000 (0:00:02.163) 0:01:52.448 *****
2026-02-14 05:38:40.282388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-14 05:38:40.282395 | orchestrator |
skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-14 05:38:40.282400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-14 05:38:40.282417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-02-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-14 05:38:40.282432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-14 05:38:40.282437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-14 05:38:40.282442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-14 05:38:40.282449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7d6eeb05', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part16', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part14', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part15', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part1', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-14 05:38:40.282455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-14 05:38:40.282464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-14 05:38:40.539826 | orchestrator |
skipping: [testbed-node-0] 2026-02-14 05:38:40.539936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:40.539955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:40.539966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:40.539978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-09-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 
'holders': []}})  2026-02-14 05:38:40.539990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:40.540001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:40.540011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:40.540051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9', 'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '582964e9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part16', 
'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part14', 'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part15', 'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part1', 'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-14 05:38:40.540085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:40.540096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:40.540106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:40.540115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:40.540125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': 
{'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:40.540142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-07-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-14 05:38:40.540164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:40.820537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:40.820653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 
'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:40.820664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b284434b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part16', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part14', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part15', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part1', 
'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-14 05:38:40.820688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:40.820693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:40.820698 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:38:40.820725 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:40.820732 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--86d1df08--738c--52e0--accb--8c0a21213af6-osd--block--86d1df08--738c--52e0--accb--8c0a21213af6', 'dm-uuid-LVM-y8TFd42k7h3tskYaBmVU96eirAODLPPWLm3s7r1uHf3qd9eZ715af0u59pi4vRGe'], 'uuids': ['6378402a-7c1c-407a-be8c-200236570708'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2ec12fdb', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Lm3s7r-1uHf-3qd9-eZ71-5af0-u59p-i4vRGe']}})  2026-02-14 05:38:40.820738 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8657c064-423f-4604-b6db-e42322d0b025', 'scsi-SQEMU_QEMU_HARDDISK_8657c064-423f-4604-b6db-e42322d0b025'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8657c064', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-14 05:38:40.820744 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-D7g0SF-SeWa-7MSU-rwcF-cnTN-mPuF-kfA0YK', 'scsi-0QEMU_QEMU_HARDDISK_763dae4f-8aba-40cd-b4e7-eeabad093491', 'scsi-SQEMU_QEMU_HARDDISK_763dae4f-8aba-40cd-b4e7-eeabad093491'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '763dae4f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--d74a1ea4--c27e--5375--be56--9d9a8e069fa6-osd--block--d74a1ea4--c27e--5375--be56--9d9a8e069fa6']}})  2026-02-14 05:38:40.820750 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:40.820758 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:38:40.820763 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:40.820768 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': 
['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-10-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-14 05:38:40.820781 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:40.876209 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Oj0FlQ-0hDf-J2Ma-enWm-21pn-eMRY-3n5AFS', 'dm-uuid-CRYPT-LUKS2-254c5794787a438987c7d5772aa30a89-Oj0FlQ-0hDf-J2Ma-enWm-21pn-eMRY-3n5AFS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-14 05:38:40.876304 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  
2026-02-14 05:38:40.876322 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d74a1ea4--c27e--5375--be56--9d9a8e069fa6-osd--block--d74a1ea4--c27e--5375--be56--9d9a8e069fa6', 'dm-uuid-LVM-bsT5DZ8cw32sKmXOfJetQqGU0HxblzT0Oj0FlQ0hDfJ2MaenWm21pneMRY3n5AFS'], 'uuids': ['254c5794-787a-4389-87c7-d5772aa30a89'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '763dae4f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Oj0FlQ-0hDf-J2Ma-enWm-21pn-eMRY-3n5AFS']}})  2026-02-14 05:38:40.876337 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-oc2pXT-2pSW-cOnk-GYPm-BmdS-2yWK-CLqXT7', 'scsi-0QEMU_QEMU_HARDDISK_2ec12fdb-ec43-4dc2-9206-4086e60213b8', 'scsi-SQEMU_QEMU_HARDDISK_2ec12fdb-ec43-4dc2-9206-4086e60213b8'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2ec12fdb', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--86d1df08--738c--52e0--accb--8c0a21213af6-osd--block--86d1df08--738c--52e0--accb--8c0a21213af6']}})  2026-02-14 05:38:40.876379 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:40.876428 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '01a64ec0', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part16', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part14', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part15', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part1', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-14 05:38:40.876443 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:40.876455 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:40.876467 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:40.876486 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Lm3s7r-1uHf-3qd9-eZ71-5af0-u59p-i4vRGe', 'dm-uuid-CRYPT-LUKS2-6378402a7c1c407abe8c200236570708-Lm3s7r-1uHf-3qd9-eZ71-5af0-u59p-i4vRGe'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-14 05:38:40.876498 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--df737486--1b51--5b4a--92b8--76d7a8957091-osd--block--df737486--1b51--5b4a--92b8--76d7a8957091', 'dm-uuid-LVM-EB1XqRdFm5BWl32sOsML4BzRiPAaSfab8xK25yZZCddpKgHxc3NQuNizerGpwRdL'], 'uuids': ['cbd2394d-6972-4905-b52e-c3fabde9215a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f960435b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['8xK25y-ZZCd-dpKg-Hxc3-NQuN-izer-GpwRdL']}})  2026-02-14 05:38:40.876523 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_600e740f-7698-45cf-9f18-28df3084435e', 'scsi-SQEMU_QEMU_HARDDISK_600e740f-7698-45cf-9f18-28df3084435e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '600e740f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': 
'1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-14 05:38:41.048236 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-PPJEoE-t8lH-Lsu9-VCxv-DzG3-SEi9-DpziQD', 'scsi-0QEMU_QEMU_HARDDISK_f8b6a063-90c4-466f-950a-7ec8689e5fcc', 'scsi-SQEMU_QEMU_HARDDISK_f8b6a063-90c4-466f-950a-7ec8689e5fcc'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f8b6a063', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--7b577363--2bac--543e--944e--5354861b1af5-osd--block--7b577363--2bac--543e--944e--5354861b1af5']}})  2026-02-14 05:38:41.048328 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:41.048343 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:41.048354 | orchestrator | skipping: [testbed-node-4] 
=> (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-04-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-14 05:38:41.048388 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:38:41.048399 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:41.048408 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-SN89xi-mT6S-OMxw-vqsI-uUyB-OeGR-YcFBXd', 'dm-uuid-CRYPT-LUKS2-366eda1d300c4ff497bf868d045a2886-SN89xi-mT6S-OMxw-vqsI-uUyB-OeGR-YcFBXd'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-14 05:38:41.048431 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:41.048459 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7b577363--2bac--543e--944e--5354861b1af5-osd--block--7b577363--2bac--543e--944e--5354861b1af5', 'dm-uuid-LVM-0VL0CxXxe2vdWsz49rVaxb3uSV9CWoFcSN89ximT6SOMxwvqsIuUyBOeGRYcFBXd'], 'uuids': ['366eda1d-300c-4ff4-97bf-868d045a2886'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f8b6a063', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['SN89xi-mT6S-OMxw-vqsI-uUyB-OeGR-YcFBXd']}})  2026-02-14 05:38:41.048471 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-9XBo1I-CFLx-ADHD-pZVq-BmE6-mdcf-IWW9zX', 'scsi-0QEMU_QEMU_HARDDISK_f960435b-b83d-47c8-ac31-653544f80bd0', 'scsi-SQEMU_QEMU_HARDDISK_f960435b-b83d-47c8-ac31-653544f80bd0'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f960435b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--df737486--1b51--5b4a--92b8--76d7a8957091-osd--block--df737486--1b51--5b4a--92b8--76d7a8957091']}})  2026-02-14 05:38:41.048481 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:41.048493 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '677d5586', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part16', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part14', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part15', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part1', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-14 05:38:41.048526 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:41.212540 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:41.212745 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-8xK25y-ZZCd-dpKg-Hxc3-NQuN-izer-GpwRdL', 'dm-uuid-CRYPT-LUKS2-cbd2394d69724905b52ec3fabde9215a-8xK25y-ZZCd-dpKg-Hxc3-NQuN-izer-GpwRdL'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-14 05:38:41.212773 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:41.212787 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f7da5590--35e5--5703--96c8--37fe127c27f7-osd--block--f7da5590--35e5--5703--96c8--37fe127c27f7', 'dm-uuid-LVM-MtrIT20WffpmoZtgfeTXRFdMHN6P3sAdBjy5doWEhe9rKv9L584cW3XE9oTwvrjF'], 'uuids': ['d1275021-b819-484f-a475-f1a37389bb5c'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '54e6ca54', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Bjy5do-WEhe-9rKv-9L58-4cW3-XE9o-TwvrjF']}})  2026-02-14 05:38:41.212833 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43152e32-b25a-4e6e-b6b7-c3272099ce67', 'scsi-SQEMU_QEMU_HARDDISK_43152e32-b25a-4e6e-b6b7-c3272099ce67'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '43152e32', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': 
'1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-14 05:38:41.212854 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-56EAYM-xHsu-7hCn-RY2l-0van-u71J-PPT3Ej', 'scsi-0QEMU_QEMU_HARDDISK_89ffb490-ef56-465e-9c2a-8772cc279d48', 'scsi-SQEMU_QEMU_HARDDISK_89ffb490-ef56-465e-9c2a-8772cc279d48'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '89ffb490', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--1745485d--ab31--507e--930d--8d3ce82a0691-osd--block--1745485d--ab31--507e--930d--8d3ce82a0691']}})  2026-02-14 05:38:41.212894 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:41.212940 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:41.212961 | orchestrator | skipping: [testbed-node-5] 
=> (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-06-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-14 05:38:41.212980 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:41.213012 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ozNSSm-TzZ4-0xZx-DrUn-qvt7-q7MT-Hzgzhl', 'dm-uuid-CRYPT-LUKS2-f72393e18a524b3b834b9c577813242e-ozNSSm-TzZ4-0xZx-DrUn-qvt7-q7MT-Hzgzhl'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-14 05:38:41.213033 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': 
'512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:41.213053 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1745485d--ab31--507e--930d--8d3ce82a0691-osd--block--1745485d--ab31--507e--930d--8d3ce82a0691', 'dm-uuid-LVM-XF74CRGH0USDiTPtHNxBQbnIHrjKBwEGozNSSmTzZ40xZxDrUnqvt7q7MTHzgzhl'], 'uuids': ['f72393e1-8a52-4b3b-834b-9c577813242e'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '89ffb490', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['ozNSSm-TzZ4-0xZx-DrUn-qvt7-q7MT-Hzgzhl']}})  2026-02-14 05:38:41.213081 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-5s32D9-BYka-Bj8X-nglK-5PU8-KqP1-tEDCHR', 'scsi-0QEMU_QEMU_HARDDISK_54e6ca54-a1fe-4396-8891-5cf52f763d40', 'scsi-SQEMU_QEMU_HARDDISK_54e6ca54-a1fe-4396-8891-5cf52f763d40'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '54e6ca54', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--f7da5590--35e5--5703--96c8--37fe127c27f7-osd--block--f7da5590--35e5--5703--96c8--37fe127c27f7']}})  2026-02-14 05:38:41.213114 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:41.397832 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:38:41.397928 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '69aee15b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part16', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part14', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part15', 
'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part1', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-14 05:38:41.397970 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:41.397985 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:41.398059 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Bjy5do-WEhe-9rKv-9L58-4cW3-XE9o-TwvrjF', 
'dm-uuid-CRYPT-LUKS2-d1275021b819484fa475f1a37389bb5c-Bjy5do-WEhe-9rKv-9L58-4cW3-XE9o-TwvrjF'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-14 05:38:41.398076 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:38:41.398088 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:41.398118 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:41.398131 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:41.398152 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': 
{'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-33-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-14 05:38:41.398164 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:41.398175 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:41.398187 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:41.398222 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_d4d2ac2a-1f55-4b1e-a4ba-c2f19de49bc7', 'scsi-SQEMU_QEMU_HARDDISK_d4d2ac2a-1f55-4b1e-a4ba-c2f19de49bc7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd4d2ac2a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4d2ac2a-1f55-4b1e-a4ba-c2f19de49bc7-part16', 'scsi-SQEMU_QEMU_HARDDISK_d4d2ac2a-1f55-4b1e-a4ba-c2f19de49bc7-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4d2ac2a-1f55-4b1e-a4ba-c2f19de49bc7-part14', 'scsi-SQEMU_QEMU_HARDDISK_d4d2ac2a-1f55-4b1e-a4ba-c2f19de49bc7-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4d2ac2a-1f55-4b1e-a4ba-c2f19de49bc7-part15', 'scsi-SQEMU_QEMU_HARDDISK_d4d2ac2a-1f55-4b1e-a4ba-c2f19de49bc7-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4d2ac2a-1f55-4b1e-a4ba-c2f19de49bc7-part1', 'scsi-SQEMU_QEMU_HARDDISK_d4d2ac2a-1f55-4b1e-a4ba-c2f19de49bc7-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-14 05:38:42.853446 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:42.853550 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:38:42.853568 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:38:42.853582 | orchestrator | 2026-02-14 05:38:42.853640 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-14 05:38:42.853655 | orchestrator | Saturday 14 February 2026 05:38:42 +0000 (0:00:02.562) 0:01:55.010 ***** 2026-02-14 05:38:42.853669 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:42.853684 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:42.853714 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:42.853727 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-02-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:42.853779 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:42.853793 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:42.853804 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:42.853827 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7d6eeb05', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part16', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part14', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part15', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part1', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:42.853858 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.082356 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.082463 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:38:43.082483 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.082497 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.082525 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.082538 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-09-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.082574 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.082686 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.082712 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.082765 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9', 'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '582964e9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part16', 'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part14', 'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part15', 
'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part1', 'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.082805 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.082829 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.419402 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:38:43.419498 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.419515 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.419526 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.419554 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-07-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.419585 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.419642 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 
'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.419673 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.419695 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b284434b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part16', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': 
'913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part14', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part15', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part1', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.419717 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.419727 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.419737 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:38:43.419755 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 
'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.658126 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--86d1df08--738c--52e0--accb--8c0a21213af6-osd--block--86d1df08--738c--52e0--accb--8c0a21213af6', 'dm-uuid-LVM-y8TFd42k7h3tskYaBmVU96eirAODLPPWLm3s7r1uHf3qd9eZ715af0u59pi4vRGe'], 'uuids': ['6378402a-7c1c-407a-be8c-200236570708'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2ec12fdb', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Lm3s7r-1uHf-3qd9-eZ71-5af0-u59p-i4vRGe']}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.658243 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8657c064-423f-4604-b6db-e42322d0b025', 'scsi-SQEMU_QEMU_HARDDISK_8657c064-423f-4604-b6db-e42322d0b025'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8657c064', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.658283 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-D7g0SF-SeWa-7MSU-rwcF-cnTN-mPuF-kfA0YK', 'scsi-0QEMU_QEMU_HARDDISK_763dae4f-8aba-40cd-b4e7-eeabad093491', 'scsi-SQEMU_QEMU_HARDDISK_763dae4f-8aba-40cd-b4e7-eeabad093491'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '763dae4f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--d74a1ea4--c27e--5375--be56--9d9a8e069fa6-osd--block--d74a1ea4--c27e--5375--be56--9d9a8e069fa6']}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.658300 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.658312 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.658341 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-10-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.658353 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.658385 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Oj0FlQ-0hDf-J2Ma-enWm-21pn-eMRY-3n5AFS', 'dm-uuid-CRYPT-LUKS2-254c5794787a438987c7d5772aa30a89-Oj0FlQ-0hDf-J2Ma-enWm-21pn-eMRY-3n5AFS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.658397 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.658408 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.658427 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d74a1ea4--c27e--5375--be56--9d9a8e069fa6-osd--block--d74a1ea4--c27e--5375--be56--9d9a8e069fa6', 'dm-uuid-LVM-bsT5DZ8cw32sKmXOfJetQqGU0HxblzT0Oj0FlQ0hDfJ2MaenWm21pneMRY3n5AFS'], 'uuids': ['254c5794-787a-4389-87c7-d5772aa30a89'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '763dae4f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Oj0FlQ-0hDf-J2Ma-enWm-21pn-eMRY-3n5AFS']}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.846702 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-oc2pXT-2pSW-cOnk-GYPm-BmdS-2yWK-CLqXT7', 'scsi-0QEMU_QEMU_HARDDISK_2ec12fdb-ec43-4dc2-9206-4086e60213b8', 'scsi-SQEMU_QEMU_HARDDISK_2ec12fdb-ec43-4dc2-9206-4086e60213b8'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2ec12fdb', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--86d1df08--738c--52e0--accb--8c0a21213af6-osd--block--86d1df08--738c--52e0--accb--8c0a21213af6']}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.846820 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--df737486--1b51--5b4a--92b8--76d7a8957091-osd--block--df737486--1b51--5b4a--92b8--76d7a8957091', 'dm-uuid-LVM-EB1XqRdFm5BWl32sOsML4BzRiPAaSfab8xK25yZZCddpKgHxc3NQuNizerGpwRdL'], 'uuids': ['cbd2394d-6972-4905-b52e-c3fabde9215a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f960435b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['8xK25y-ZZCd-dpKg-Hxc3-NQuN-izer-GpwRdL']}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.846858 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.846891 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 
'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '01a64ec0', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part16', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part14', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part15', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part1', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 
'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.846904 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.846926 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_600e740f-7698-45cf-9f18-28df3084435e', 'scsi-SQEMU_QEMU_HARDDISK_600e740f-7698-45cf-9f18-28df3084435e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '600e740f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.846939 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.846949 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Lm3s7r-1uHf-3qd9-eZ71-5af0-u59p-i4vRGe', 'dm-uuid-CRYPT-LUKS2-6378402a7c1c407abe8c200236570708-Lm3s7r-1uHf-3qd9-eZ71-5af0-u59p-i4vRGe'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.846968 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-PPJEoE-t8lH-Lsu9-VCxv-DzG3-SEi9-DpziQD', 'scsi-0QEMU_QEMU_HARDDISK_f8b6a063-90c4-466f-950a-7ec8689e5fcc', 'scsi-SQEMU_QEMU_HARDDISK_f8b6a063-90c4-466f-950a-7ec8689e5fcc'], 'uuids': [], 'labels': [], 
'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f8b6a063', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--7b577363--2bac--543e--944e--5354861b1af5-osd--block--7b577363--2bac--543e--944e--5354861b1af5']}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.853414 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.853467 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.853478 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-04-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.853488 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.853499 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-SN89xi-mT6S-OMxw-vqsI-uUyB-OeGR-YcFBXd', 'dm-uuid-CRYPT-LUKS2-366eda1d300c4ff497bf868d045a2886-SN89xi-mT6S-OMxw-vqsI-uUyB-OeGR-YcFBXd'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.853509 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.853531 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7b577363--2bac--543e--944e--5354861b1af5-osd--block--7b577363--2bac--543e--944e--5354861b1af5', 'dm-uuid-LVM-0VL0CxXxe2vdWsz49rVaxb3uSV9CWoFcSN89ximT6SOMxwvqsIuUyBOeGRYcFBXd'], 'uuids': ['366eda1d-300c-4ff4-97bf-868d045a2886'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f8b6a063', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['SN89xi-mT6S-OMxw-vqsI-uUyB-OeGR-YcFBXd']}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.853553 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-9XBo1I-CFLx-ADHD-pZVq-BmE6-mdcf-IWW9zX', 
'scsi-0QEMU_QEMU_HARDDISK_f960435b-b83d-47c8-ac31-653544f80bd0', 'scsi-SQEMU_QEMU_HARDDISK_f960435b-b83d-47c8-ac31-653544f80bd0'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f960435b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--df737486--1b51--5b4a--92b8--76d7a8957091-osd--block--df737486--1b51--5b4a--92b8--76d7a8957091']}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.853564 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.853574 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.853682 | 
orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '677d5586', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part16', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part14', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part15', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part1', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 
'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.950568 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f7da5590--35e5--5703--96c8--37fe127c27f7-osd--block--f7da5590--35e5--5703--96c8--37fe127c27f7', 'dm-uuid-LVM-MtrIT20WffpmoZtgfeTXRFdMHN6P3sAdBjy5doWEhe9rKv9L584cW3XE9oTwvrjF'], 'uuids': ['d1275021-b819-484f-a475-f1a37389bb5c'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '54e6ca54', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Bjy5do-WEhe-9rKv-9L58-4cW3-XE9o-TwvrjF']}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.950715 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.950729 | orchestrator | skipping: [testbed-node-5] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43152e32-b25a-4e6e-b6b7-c3272099ce67', 'scsi-SQEMU_QEMU_HARDDISK_43152e32-b25a-4e6e-b6b7-c3272099ce67'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '43152e32', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.950740 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.950773 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-8xK25y-ZZCd-dpKg-Hxc3-NQuN-izer-GpwRdL', 'dm-uuid-CRYPT-LUKS2-cbd2394d69724905b52ec3fabde9215a-8xK25y-ZZCd-dpKg-Hxc3-NQuN-izer-GpwRdL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.950822 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-56EAYM-xHsu-7hCn-RY2l-0van-u71J-PPT3Ej', 'scsi-0QEMU_QEMU_HARDDISK_89ffb490-ef56-465e-9c2a-8772cc279d48', 'scsi-SQEMU_QEMU_HARDDISK_89ffb490-ef56-465e-9c2a-8772cc279d48'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '89ffb490', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--1745485d--ab31--507e--930d--8d3ce82a0691-osd--block--1745485d--ab31--507e--930d--8d3ce82a0691']}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.950845 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.950863 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.950881 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-06-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.950901 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.950968 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ozNSSm-TzZ4-0xZx-DrUn-qvt7-q7MT-Hzgzhl', 'dm-uuid-CRYPT-LUKS2-f72393e18a524b3b834b9c577813242e-ozNSSm-TzZ4-0xZx-DrUn-qvt7-q7MT-Hzgzhl'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:43.950999 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:44.034356 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:38:44.034448 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1745485d--ab31--507e--930d--8d3ce82a0691-osd--block--1745485d--ab31--507e--930d--8d3ce82a0691', 'dm-uuid-LVM-XF74CRGH0USDiTPtHNxBQbnIHrjKBwEGozNSSmTzZ40xZxDrUnqvt7q7MTHzgzhl'], 'uuids': ['f72393e1-8a52-4b3b-834b-9c577813242e'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '89ffb490', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['ozNSSm-TzZ4-0xZx-DrUn-qvt7-q7MT-Hzgzhl']}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:44.034462 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:38:44.034473 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-5s32D9-BYka-Bj8X-nglK-5PU8-KqP1-tEDCHR', 'scsi-0QEMU_QEMU_HARDDISK_54e6ca54-a1fe-4396-8891-5cf52f763d40', 'scsi-SQEMU_QEMU_HARDDISK_54e6ca54-a1fe-4396-8891-5cf52f763d40'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 
'sas_device_handle': None, 'serial': '54e6ca54', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--f7da5590--35e5--5703--96c8--37fe127c27f7-osd--block--f7da5590--35e5--5703--96c8--37fe127c27f7']}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:44.034486 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:44.034546 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '69aee15b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part16', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 
'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part14', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part15', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part1', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:44.034560 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:44.034570 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:44.034585 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 
'ansible_loop_var': 'item'})  2026-02-14 05:38:44.034619 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:44.034634 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-33-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:44.034650 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:57.456274 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:57.456383 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:57.456422 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 
'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:57.456467 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4d2ac2a-1f55-4b1e-a4ba-c2f19de49bc7', 'scsi-SQEMU_QEMU_HARDDISK_d4d2ac2a-1f55-4b1e-a4ba-c2f19de49bc7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd4d2ac2a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4d2ac2a-1f55-4b1e-a4ba-c2f19de49bc7-part16', 'scsi-SQEMU_QEMU_HARDDISK_d4d2ac2a-1f55-4b1e-a4ba-c2f19de49bc7-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4d2ac2a-1f55-4b1e-a4ba-c2f19de49bc7-part14', 'scsi-SQEMU_QEMU_HARDDISK_d4d2ac2a-1f55-4b1e-a4ba-c2f19de49bc7-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4d2ac2a-1f55-4b1e-a4ba-c2f19de49bc7-part15', 'scsi-SQEMU_QEMU_HARDDISK_d4d2ac2a-1f55-4b1e-a4ba-c2f19de49bc7-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4d2ac2a-1f55-4b1e-a4ba-c2f19de49bc7-part1', 'scsi-SQEMU_QEMU_HARDDISK_d4d2ac2a-1f55-4b1e-a4ba-c2f19de49bc7-part1'], 
'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:57.456482 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Bjy5do-WEhe-9rKv-9L58-4cW3-XE9o-TwvrjF', 'dm-uuid-CRYPT-LUKS2-d1275021b819484fa475f1a37389bb5c-Bjy5do-WEhe-9rKv-9L58-4cW3-XE9o-TwvrjF'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:57.456493 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:57.456511 | orchestrator | skipping: [testbed-node-5] 2026-02-14 
05:38:57.456522 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:38:57.456533 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:38:57.456543 | orchestrator | 2026-02-14 05:38:57.456553 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-14 05:38:57.456565 | orchestrator | Saturday 14 February 2026 05:38:45 +0000 (0:00:02.889) 0:01:57.900 ***** 2026-02-14 05:38:57.456574 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:38:57.456585 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:38:57.456595 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:38:57.456634 | orchestrator | ok: [testbed-node-3] 2026-02-14 05:38:57.456644 | orchestrator | ok: [testbed-node-4] 2026-02-14 05:38:57.456653 | orchestrator | ok: [testbed-node-5] 2026-02-14 05:38:57.456663 | orchestrator | ok: [testbed-manager] 2026-02-14 05:38:57.456672 | orchestrator | 2026-02-14 05:38:57.456682 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-14 05:38:57.456691 | orchestrator | Saturday 14 February 2026 05:38:48 +0000 (0:00:02.602) 0:02:00.502 ***** 2026-02-14 05:38:57.456701 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:38:57.456711 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:38:57.456720 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:38:57.456735 | orchestrator | ok: 
[testbed-node-3] 2026-02-14 05:38:57.456744 | orchestrator | ok: [testbed-node-4] 2026-02-14 05:38:57.456754 | orchestrator | ok: [testbed-node-5] 2026-02-14 05:38:57.456763 | orchestrator | ok: [testbed-manager] 2026-02-14 05:38:57.456772 | orchestrator | 2026-02-14 05:38:57.456783 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-14 05:38:57.456795 | orchestrator | Saturday 14 February 2026 05:38:50 +0000 (0:00:01.941) 0:02:02.444 ***** 2026-02-14 05:38:57.456805 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:38:57.456816 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:38:57.456827 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:38:57.456838 | orchestrator | ok: [testbed-node-3] 2026-02-14 05:38:57.456849 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:38:57.456859 | orchestrator | ok: [testbed-node-4] 2026-02-14 05:38:57.456870 | orchestrator | ok: [testbed-node-5] 2026-02-14 05:38:57.456881 | orchestrator | 2026-02-14 05:38:57.456893 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-14 05:38:57.456904 | orchestrator | Saturday 14 February 2026 05:38:52 +0000 (0:00:02.563) 0:02:05.007 ***** 2026-02-14 05:38:57.456915 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:38:57.456924 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:38:57.456934 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:38:57.456943 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:38:57.456952 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:38:57.456962 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:38:57.456971 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:38:57.456981 | orchestrator | 2026-02-14 05:38:57.456990 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-14 05:38:57.457006 | orchestrator | Saturday 14 February 2026 05:38:54 +0000 
(0:00:02.002) 0:02:07.010 ***** 2026-02-14 05:38:57.457016 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:38:57.457025 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:38:57.457035 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:38:57.457044 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:38:57.457061 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:39:26.882163 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:39:26.882240 | orchestrator | ok: [testbed-manager -> testbed-node-2(192.168.16.12)] 2026-02-14 05:39:26.882246 | orchestrator | 2026-02-14 05:39:26.882251 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-14 05:39:26.882257 | orchestrator | Saturday 14 February 2026 05:38:57 +0000 (0:00:02.754) 0:02:09.764 ***** 2026-02-14 05:39:26.882261 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:39:26.882265 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:39:26.882269 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:39:26.882273 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:39:26.882277 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:39:26.882281 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:39:26.882285 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:39:26.882288 | orchestrator | 2026-02-14 05:39:26.882292 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-14 05:39:26.882296 | orchestrator | Saturday 14 February 2026 05:38:59 +0000 (0:00:01.982) 0:02:11.749 ***** 2026-02-14 05:39:26.882301 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-14 05:39:26.882305 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-02-14 05:39:26.882309 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-14 05:39:26.882312 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-14 
05:39:26.882316 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-02-14 05:39:26.882320 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-02-14 05:39:26.882323 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-14 05:39:26.882327 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-14 05:39:26.882331 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-02-14 05:39:26.882334 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-14 05:39:26.882338 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-14 05:39:26.882342 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-14 05:39:26.882345 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-14 05:39:26.882349 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-14 05:39:26.882353 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-14 05:39:26.882356 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-14 05:39:26.882360 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-14 05:39:26.882364 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-14 05:39:26.882367 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-14 05:39:26.882371 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-14 05:39:26.882375 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-14 05:39:26.882378 | orchestrator | 2026-02-14 05:39:26.882382 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-14 05:39:26.882386 | orchestrator | Saturday 14 February 2026 05:39:02 +0000 (0:00:03.577) 0:02:15.326 ***** 2026-02-14 05:39:26.882389 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-14 05:39:26.882394 | orchestrator | skipping: [testbed-node-0] => 
(item=testbed-node-1)  2026-02-14 05:39:26.882397 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-14 05:39:26.882401 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:39:26.882405 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-14 05:39:26.882408 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-14 05:39:26.882430 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-14 05:39:26.882434 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:39:26.882438 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-14 05:39:26.882442 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-14 05:39:26.882445 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-14 05:39:26.882449 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:39:26.882453 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-14 05:39:26.882457 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-14 05:39:26.882460 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-14 05:39:26.882464 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:39:26.882468 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-14 05:39:26.882471 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-14 05:39:26.882475 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-14 05:39:26.882479 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:39:26.882482 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-14 05:39:26.882486 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-14 05:39:26.882489 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-14 05:39:26.882493 | orchestrator | skipping: 
[testbed-node-5] 2026-02-14 05:39:26.882497 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-14 05:39:26.882500 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-14 05:39:26.882504 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-14 05:39:26.882508 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:39:26.882512 | orchestrator | 2026-02-14 05:39:26.882515 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-14 05:39:26.882519 | orchestrator | Saturday 14 February 2026 05:39:05 +0000 (0:00:02.329) 0:02:17.656 ***** 2026-02-14 05:39:26.882523 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:39:26.882526 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:39:26.882530 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:39:26.882534 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:39:26.882547 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-14 05:39:26.882552 | orchestrator | 2026-02-14 05:39:26.882555 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-14 05:39:26.882560 | orchestrator | Saturday 14 February 2026 05:39:07 +0000 (0:00:02.061) 0:02:19.717 ***** 2026-02-14 05:39:26.882564 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:39:26.882568 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:39:26.882571 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:39:26.882575 | orchestrator | 2026-02-14 05:39:26.882579 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-14 05:39:26.882583 | orchestrator | Saturday 14 February 2026 05:39:09 +0000 (0:00:01.786) 0:02:21.504 ***** 2026-02-14 05:39:26.882679 | orchestrator | 
skipping: [testbed-node-3] 2026-02-14 05:39:26.882687 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:39:26.882691 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:39:26.882694 | orchestrator | 2026-02-14 05:39:26.882698 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-14 05:39:26.882702 | orchestrator | Saturday 14 February 2026 05:39:10 +0000 (0:00:01.429) 0:02:22.934 ***** 2026-02-14 05:39:26.882705 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:39:26.882709 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:39:26.882713 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:39:26.882716 | orchestrator | 2026-02-14 05:39:26.882720 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-14 05:39:26.882728 | orchestrator | Saturday 14 February 2026 05:39:11 +0000 (0:00:01.353) 0:02:24.287 ***** 2026-02-14 05:39:26.882731 | orchestrator | ok: [testbed-node-3] 2026-02-14 05:39:26.882736 | orchestrator | ok: [testbed-node-4] 2026-02-14 05:39:26.882740 | orchestrator | ok: [testbed-node-5] 2026-02-14 05:39:26.882745 | orchestrator | 2026-02-14 05:39:26.882749 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-14 05:39:26.882753 | orchestrator | Saturday 14 February 2026 05:39:13 +0000 (0:00:01.475) 0:02:25.763 ***** 2026-02-14 05:39:26.882764 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-14 05:39:26.882769 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-14 05:39:26.882773 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-14 05:39:26.882778 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:39:26.882782 | orchestrator | 2026-02-14 05:39:26.882786 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-14 05:39:26.882796 | 
orchestrator | Saturday 14 February 2026 05:39:15 +0000 (0:00:01.681) 0:02:27.444 ***** 2026-02-14 05:39:26.882801 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-14 05:39:26.882805 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-14 05:39:26.882809 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-14 05:39:26.882813 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:39:26.882817 | orchestrator | 2026-02-14 05:39:26.882821 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-14 05:39:26.882825 | orchestrator | Saturday 14 February 2026 05:39:16 +0000 (0:00:01.733) 0:02:29.178 ***** 2026-02-14 05:39:26.882829 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-14 05:39:26.882832 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-14 05:39:26.882836 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-14 05:39:26.882840 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:39:26.882844 | orchestrator | 2026-02-14 05:39:26.882847 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-14 05:39:26.882851 | orchestrator | Saturday 14 February 2026 05:39:18 +0000 (0:00:01.721) 0:02:30.900 ***** 2026-02-14 05:39:26.882855 | orchestrator | ok: [testbed-node-3] 2026-02-14 05:39:26.882859 | orchestrator | ok: [testbed-node-4] 2026-02-14 05:39:26.882862 | orchestrator | ok: [testbed-node-5] 2026-02-14 05:39:26.882866 | orchestrator | 2026-02-14 05:39:26.882870 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-14 05:39:26.882874 | orchestrator | Saturday 14 February 2026 05:39:19 +0000 (0:00:01.382) 0:02:32.282 ***** 2026-02-14 05:39:26.882880 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-14 05:39:26.882883 | orchestrator | ok: 
[testbed-node-4] => (item=0) 2026-02-14 05:39:26.882887 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-14 05:39:26.882891 | orchestrator | 2026-02-14 05:39:26.882895 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-14 05:39:26.882899 | orchestrator | Saturday 14 February 2026 05:39:21 +0000 (0:00:01.604) 0:02:33.887 ***** 2026-02-14 05:39:26.882902 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-14 05:39:26.882906 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 05:39:26.882910 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 05:39:26.882914 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-14 05:39:26.882918 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-14 05:39:26.882922 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-14 05:39:26.882925 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-14 05:39:26.882932 | orchestrator | 2026-02-14 05:39:26.882936 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-14 05:39:26.882939 | orchestrator | Saturday 14 February 2026 05:39:23 +0000 (0:00:02.157) 0:02:36.045 ***** 2026-02-14 05:39:26.882943 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-14 05:39:26.882947 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 05:39:26.882951 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 05:39:26.882958 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-14 05:40:14.226512 | 
orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-14 05:40:14.226685 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-14 05:40:14.226704 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-14 05:40:14.226716 | orchestrator | 2026-02-14 05:40:14.226729 | orchestrator | TASK [ceph-infra : Update cache for Debian based OSs] ************************** 2026-02-14 05:40:14.226741 | orchestrator | Saturday 14 February 2026 05:39:26 +0000 (0:00:03.149) 0:02:39.195 ***** 2026-02-14 05:40:14.226752 | orchestrator | changed: [testbed-node-3] 2026-02-14 05:40:14.226764 | orchestrator | changed: [testbed-node-5] 2026-02-14 05:40:14.226774 | orchestrator | changed: [testbed-node-4] 2026-02-14 05:40:14.226785 | orchestrator | changed: [testbed-manager] 2026-02-14 05:40:14.226795 | orchestrator | changed: [testbed-node-0] 2026-02-14 05:40:14.226806 | orchestrator | changed: [testbed-node-1] 2026-02-14 05:40:14.226817 | orchestrator | changed: [testbed-node-2] 2026-02-14 05:40:14.226828 | orchestrator | 2026-02-14 05:40:14.226839 | orchestrator | TASK [ceph-infra : Include_tasks configure_firewall.yml] *********************** 2026-02-14 05:40:14.226849 | orchestrator | Saturday 14 February 2026 05:39:37 +0000 (0:00:11.018) 0:02:50.213 ***** 2026-02-14 05:40:14.226860 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:40:14.226872 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:40:14.226883 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:40:14.226894 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:40:14.226904 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:40:14.226915 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:40:14.226926 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:40:14.226936 | orchestrator | 2026-02-14 05:40:14.226947 | orchestrator | TASK 
[ceph-infra : Include_tasks setup_ntp.yml] ******************************** 2026-02-14 05:40:14.226958 | orchestrator | Saturday 14 February 2026 05:39:40 +0000 (0:00:02.214) 0:02:52.428 ***** 2026-02-14 05:40:14.226969 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:40:14.226979 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:40:14.226990 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:40:14.227000 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:40:14.227011 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:40:14.227021 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:40:14.227032 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:40:14.227044 | orchestrator | 2026-02-14 05:40:14.227058 | orchestrator | TASK [ceph-infra : Add logrotate configuration] ******************************** 2026-02-14 05:40:14.227070 | orchestrator | Saturday 14 February 2026 05:39:41 +0000 (0:00:01.860) 0:02:54.289 ***** 2026-02-14 05:40:14.227083 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:40:14.227095 | orchestrator | changed: [testbed-node-2] 2026-02-14 05:40:14.227106 | orchestrator | changed: [testbed-node-1] 2026-02-14 05:40:14.227118 | orchestrator | changed: [testbed-node-0] 2026-02-14 05:40:14.227130 | orchestrator | changed: [testbed-node-3] 2026-02-14 05:40:14.227143 | orchestrator | changed: [testbed-node-4] 2026-02-14 05:40:14.227155 | orchestrator | changed: [testbed-node-5] 2026-02-14 05:40:14.227168 | orchestrator | 2026-02-14 05:40:14.227180 | orchestrator | TASK [ceph-validate : Include check_system.yml] ******************************** 2026-02-14 05:40:14.227192 | orchestrator | Saturday 14 February 2026 05:39:44 +0000 (0:00:03.030) 0:02:57.320 ***** 2026-02-14 05:40:14.227234 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_system.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-02-14 
05:40:14.227248 | orchestrator | 2026-02-14 05:40:14.227261 | orchestrator | TASK [ceph-validate : Fail on unsupported ansible version (1.X)] *************** 2026-02-14 05:40:14.227273 | orchestrator | Saturday 14 February 2026 05:39:48 +0000 (0:00:03.179) 0:03:00.500 ***** 2026-02-14 05:40:14.227285 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:40:14.227297 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:40:14.227310 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:40:14.227322 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:40:14.227334 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:40:14.227346 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:40:14.227373 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:40:14.227387 | orchestrator | 2026-02-14 05:40:14.227399 | orchestrator | TASK [ceph-validate : Fail on unsupported system] ****************************** 2026-02-14 05:40:14.227411 | orchestrator | Saturday 14 February 2026 05:39:50 +0000 (0:00:02.027) 0:03:02.527 ***** 2026-02-14 05:40:14.227421 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:40:14.227432 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:40:14.227442 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:40:14.227453 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:40:14.227463 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:40:14.227473 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:40:14.227484 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:40:14.227494 | orchestrator | 2026-02-14 05:40:14.227505 | orchestrator | TASK [ceph-validate : Fail on unsupported architecture] ************************ 2026-02-14 05:40:14.227516 | orchestrator | Saturday 14 February 2026 05:39:52 +0000 (0:00:02.298) 0:03:04.826 ***** 2026-02-14 05:40:14.227526 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:40:14.227537 | orchestrator | skipping: [testbed-node-1] 2026-02-14 
05:40:14.227547 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:40:14.227558 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:40:14.227568 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:40:14.227579 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:40:14.227589 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:40:14.227600 | orchestrator | 2026-02-14 05:40:14.227610 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution] ************************ 2026-02-14 05:40:14.227646 | orchestrator | Saturday 14 February 2026 05:39:54 +0000 (0:00:01.927) 0:03:06.753 ***** 2026-02-14 05:40:14.227657 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:40:14.227668 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:40:14.227678 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:40:14.227689 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:40:14.227699 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:40:14.227710 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:40:14.227720 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:40:14.227731 | orchestrator | 2026-02-14 05:40:14.227759 | orchestrator | TASK [ceph-validate : Fail on unsupported CentOS release] ********************** 2026-02-14 05:40:14.227771 | orchestrator | Saturday 14 February 2026 05:39:56 +0000 (0:00:02.236) 0:03:08.990 ***** 2026-02-14 05:40:14.227781 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:40:14.227792 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:40:14.227803 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:40:14.227813 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:40:14.227824 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:40:14.227834 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:40:14.227845 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:40:14.227855 | orchestrator | 2026-02-14 05:40:14.227866 | orchestrator 
| TASK [ceph-validate : Fail on unsupported distribution for ubuntu cloud archive] ***
2026-02-14 05:40:14.227876 | orchestrator | Saturday 14 February 2026 05:39:58 +0000 (0:00:02.008) 0:03:10.998 *****
2026-02-14 05:40:14.227896 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:40:14.227907 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:40:14.227918 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:40:14.227928 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:40:14.227939 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:40:14.227950 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:40:14.227960 | orchestrator | skipping: [testbed-manager]
2026-02-14 05:40:14.227971 | orchestrator |
2026-02-14 05:40:14.227982 | orchestrator | TASK [ceph-validate : Fail on unsupported SUSE/openSUSE distribution (only 15.x supported)] ***
2026-02-14 05:40:14.227993 | orchestrator | Saturday 14 February 2026 05:40:00 +0000 (0:00:02.292) 0:03:13.290 *****
2026-02-14 05:40:14.228003 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:40:14.228014 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:40:14.228024 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:40:14.228035 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:40:14.228045 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:40:14.228056 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:40:14.228066 | orchestrator | skipping: [testbed-manager]
2026-02-14 05:40:14.228077 | orchestrator |
2026-02-14 05:40:14.228087 | orchestrator | TASK [ceph-validate : Fail if systemd is not present] **************************
2026-02-14 05:40:14.228098 | orchestrator | Saturday 14 February 2026 05:40:02 +0000 (0:00:01.985) 0:03:15.276 *****
2026-02-14 05:40:14.228109 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:40:14.228120 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:40:14.228130 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:40:14.228141 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:40:14.228151 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:40:14.228162 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:40:14.228172 | orchestrator | skipping: [testbed-manager]
2026-02-14 05:40:14.228182 | orchestrator |
2026-02-14 05:40:14.228193 | orchestrator | TASK [ceph-validate : Validate repository variables in non-containerized scenario] ***
2026-02-14 05:40:14.228204 | orchestrator | Saturday 14 February 2026 05:40:05 +0000 (0:00:02.164) 0:03:17.440 *****
2026-02-14 05:40:14.228214 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:40:14.228225 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:40:14.228235 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:40:14.228246 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:40:14.228256 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:40:14.228267 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:40:14.228277 | orchestrator | skipping: [testbed-manager]
2026-02-14 05:40:14.228288 | orchestrator |
2026-02-14 05:40:14.228298 | orchestrator | TASK [ceph-validate : Validate osd_objectstore] ********************************
2026-02-14 05:40:14.228309 | orchestrator | Saturday 14 February 2026 05:40:07 +0000 (0:00:02.146) 0:03:19.586 *****
2026-02-14 05:40:14.228320 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:40:14.228330 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:40:14.228341 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:40:14.228351 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:40:14.228362 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:40:14.228373 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:40:14.228383 | orchestrator | skipping: [testbed-manager]
2026-02-14 05:40:14.228394 | orchestrator |
2026-02-14 05:40:14.228405 | orchestrator | TASK [ceph-validate : Validate radosgw network configuration] ******************
2026-02-14 05:40:14.228416 | orchestrator | Saturday 14 February 2026 05:40:09 +0000 (0:00:01.919) 0:03:21.506 *****
2026-02-14 05:40:14.228432 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:40:14.228443 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:40:14.228454 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:40:14.228464 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:40:14.228475 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:40:14.228485 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:40:14.228503 | orchestrator | skipping: [testbed-manager]
2026-02-14 05:40:14.228514 | orchestrator |
2026-02-14 05:40:14.228525 | orchestrator | TASK [ceph-validate : Validate lvm osd scenario] *******************************
2026-02-14 05:40:14.228535 | orchestrator | Saturday 14 February 2026 05:40:11 +0000 (0:00:02.203) 0:03:23.709 *****
2026-02-14 05:40:14.228546 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:40:14.228556 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:40:14.228567 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:40:14.228577 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:40:14.228588 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:40:14.228598 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:40:14.228609 | orchestrator | skipping: [testbed-manager]
2026-02-14 05:40:14.228635 | orchestrator |
2026-02-14 05:40:14.228646 | orchestrator | TASK [ceph-validate : Validate bluestore lvm osd scenario] *********************
2026-02-14 05:40:14.228657 | orchestrator | Saturday 14 February 2026 05:40:13 +0000 (0:00:01.931) 0:03:25.641 *****
2026-02-14 05:40:14.228668 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:40:14.228678 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:40:14.228689 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:40:14.228700 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d74a1ea4-c27e-5375-be56-9d9a8e069fa6', 'data_vg': 'ceph-d74a1ea4-c27e-5375-be56-9d9a8e069fa6'})
2026-02-14 05:40:14.228713 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-86d1df08-738c-52e0-accb-8c0a21213af6', 'data_vg': 'ceph-86d1df08-738c-52e0-accb-8c0a21213af6'})
2026-02-14 05:40:14.228724 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:40:14.228742 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b577363-2bac-543e-944e-5354861b1af5', 'data_vg': 'ceph-7b577363-2bac-543e-944e-5354861b1af5'})
2026-02-14 05:40:39.980877 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df737486-1b51-5b4a-92b8-76d7a8957091', 'data_vg': 'ceph-df737486-1b51-5b4a-92b8-76d7a8957091'})
2026-02-14 05:40:39.980984 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1745485d-ab31-507e-930d-8d3ce82a0691', 'data_vg': 'ceph-1745485d-ab31-507e-930d-8d3ce82a0691'})
2026-02-14 05:40:39.980998 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7da5590-35e5-5703-96c8-37fe127c27f7', 'data_vg': 'ceph-f7da5590-35e5-5703-96c8-37fe127c27f7'})
2026-02-14 05:40:39.981009 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:40:39.981020 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:40:39.981029 | orchestrator | skipping: [testbed-manager]
2026-02-14 05:40:39.981039 | orchestrator |
2026-02-14 05:40:39.981049 | orchestrator | TASK [ceph-validate : Fail if local scenario is enabled on debian] *************
2026-02-14 05:40:39.981061 | orchestrator | Saturday 14 February 2026 05:40:16 +0000 (0:00:02.823) 0:03:28.465 *****
2026-02-14 05:40:39.981071 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:40:39.981080 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:40:39.981090 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:40:39.981099 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:40:39.981109 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:40:39.981118 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:40:39.981128 | orchestrator | skipping: [testbed-manager]
2026-02-14 05:40:39.981137 | orchestrator |
2026-02-14 05:40:39.981147 | orchestrator | TASK [ceph-validate : Fail if rhcs repository is enabled on debian] ************
2026-02-14 05:40:39.981156 | orchestrator | Saturday 14 February 2026 05:40:17 +0000 (0:00:01.853) 0:03:30.319 *****
2026-02-14 05:40:39.981166 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:40:39.981175 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:40:39.981184 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:40:39.981194 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:40:39.981203 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:40:39.981213 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:40:39.981222 | orchestrator | skipping: [testbed-manager]
2026-02-14 05:40:39.981269 | orchestrator |
2026-02-14 05:40:39.981288 | orchestrator | TASK [ceph-validate : Check ceph_origin definition on SUSE/openSUSE Leap] ******
2026-02-14 05:40:39.981304 | orchestrator | Saturday 14 February 2026 05:40:20 +0000 (0:00:02.249) 0:03:32.568 *****
2026-02-14 05:40:39.981320 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:40:39.981332 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:40:39.981342 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:40:39.981351 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:40:39.981361 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:40:39.981370 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:40:39.981379 | orchestrator | skipping: [testbed-manager]
2026-02-14 05:40:39.981388 | orchestrator |
2026-02-14 05:40:39.981398 | orchestrator | TASK [ceph-validate : Check ceph_repository definition on SUSE/openSUSE Leap] ***
2026-02-14 05:40:39.981407 | orchestrator | Saturday 14 February 2026 05:40:22 +0000 (0:00:01.909) 0:03:34.478 *****
2026-02-14 05:40:39.981417 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:40:39.981426 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:40:39.981436 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:40:39.981445 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:40:39.981454 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:40:39.981463 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:40:39.981473 | orchestrator | skipping: [testbed-manager]
2026-02-14 05:40:39.981482 | orchestrator |
2026-02-14 05:40:39.981492 | orchestrator | TASK [ceph-validate : Validate ntp daemon type] ********************************
2026-02-14 05:40:39.981501 | orchestrator | Saturday 14 February 2026 05:40:24 +0000 (0:00:02.152) 0:03:36.756 *****
2026-02-14 05:40:39.981511 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:40:39.981520 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:40:39.981529 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:40:39.981553 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:40:39.981563 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:40:39.981572 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:40:39.981581 | orchestrator | skipping: [testbed-manager]
2026-02-14 05:40:39.981591 | orchestrator |
2026-02-14 05:40:39.981600 | orchestrator | TASK [ceph-validate : Abort if ntp_daemon_type is ntpd on Atomic] **************
2026-02-14 05:40:39.981610 | orchestrator | Saturday 14 February 2026 05:40:26 +0000 (0:00:02.152) 0:03:38.909 *****
2026-02-14 05:40:39.981620 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:40:39.981654 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:40:39.981664 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:40:39.981674 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:40:39.981683 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:40:39.981692 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:40:39.981702 | orchestrator | skipping: [testbed-manager]
2026-02-14 05:40:39.981711 | orchestrator |
2026-02-14 05:40:39.981720 | orchestrator | TASK [ceph-validate : Include check_devices.yml] *******************************
2026-02-14 05:40:39.981730 | orchestrator | Saturday 14 February 2026 05:40:28 +0000 (0:00:01.897) 0:03:40.806 *****
2026-02-14 05:40:39.981739 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:40:39.981749 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:40:39.981758 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:40:39.981767 | orchestrator | skipping: [testbed-manager]
2026-02-14 05:40:39.981777 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_devices.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-14 05:40:39.981787 | orchestrator |
2026-02-14 05:40:39.981796 | orchestrator | TASK [ceph-validate : Set_fact root_device] ************************************
2026-02-14 05:40:39.981807 | orchestrator | Saturday 14 February 2026 05:40:31 +0000 (0:00:02.695) 0:03:43.502 *****
2026-02-14 05:40:39.981818 | orchestrator | ok: [testbed-node-3]
2026-02-14 05:40:39.981829 | orchestrator | ok: [testbed-node-4]
2026-02-14 05:40:39.981840 | orchestrator | ok: [testbed-node-5]
2026-02-14 05:40:39.981851 | orchestrator |
2026-02-14 05:40:39.981861 | orchestrator | TASK [ceph-validate : Resolve devices in lvm_volumes] **************************
2026-02-14 05:40:39.981883 | orchestrator | Saturday 14 February 2026 05:40:32 +0000 (0:00:01.516) 0:03:45.019 *****
2026-02-14 05:40:39.981912 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d74a1ea4-c27e-5375-be56-9d9a8e069fa6', 'data_vg': 'ceph-d74a1ea4-c27e-5375-be56-9d9a8e069fa6'})
2026-02-14 05:40:39.981924 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-86d1df08-738c-52e0-accb-8c0a21213af6', 'data_vg': 'ceph-86d1df08-738c-52e0-accb-8c0a21213af6'})
2026-02-14 05:40:39.981935 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:40:39.981946 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b577363-2bac-543e-944e-5354861b1af5', 'data_vg': 'ceph-7b577363-2bac-543e-944e-5354861b1af5'})
2026-02-14 05:40:39.981957 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df737486-1b51-5b4a-92b8-76d7a8957091', 'data_vg': 'ceph-df737486-1b51-5b4a-92b8-76d7a8957091'})
2026-02-14 05:40:39.981968 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:40:39.981979 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1745485d-ab31-507e-930d-8d3ce82a0691', 'data_vg': 'ceph-1745485d-ab31-507e-930d-8d3ce82a0691'})
2026-02-14 05:40:39.981990 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7da5590-35e5-5703-96c8-37fe127c27f7', 'data_vg': 'ceph-f7da5590-35e5-5703-96c8-37fe127c27f7'})
2026-02-14 05:40:39.982000 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:40:39.982011 | orchestrator |
2026-02-14 05:40:39.982079 | orchestrator | TASK [ceph-validate : Set_fact lvm_volumes_data_devices] ***********************
2026-02-14 05:40:39.982090 | orchestrator | Saturday 14 February 2026 05:40:34 +0000 (0:00:01.400) 0:03:46.420 *****
2026-02-14 05:40:39.982103 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-d74a1ea4-c27e-5375-be56-9d9a8e069fa6', 'data_vg': 'ceph-d74a1ea4-c27e-5375-be56-9d9a8e069fa6'}, 'ansible_loop_var': 'item'})
2026-02-14 05:40:39.982116 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-86d1df08-738c-52e0-accb-8c0a21213af6', 'data_vg': 'ceph-86d1df08-738c-52e0-accb-8c0a21213af6'}, 'ansible_loop_var': 'item'})
2026-02-14 05:40:39.982127 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:40:39.982138 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-7b577363-2bac-543e-944e-5354861b1af5', 'data_vg': 'ceph-7b577363-2bac-543e-944e-5354861b1af5'}, 'ansible_loop_var': 'item'})
2026-02-14 05:40:39.982150 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-df737486-1b51-5b4a-92b8-76d7a8957091', 'data_vg': 'ceph-df737486-1b51-5b4a-92b8-76d7a8957091'}, 'ansible_loop_var': 'item'})
2026-02-14 05:40:39.982166 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:40:39.982177 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-1745485d-ab31-507e-930d-8d3ce82a0691', 'data_vg': 'ceph-1745485d-ab31-507e-930d-8d3ce82a0691'}, 'ansible_loop_var': 'item'})
2026-02-14 05:40:39.982189 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-f7da5590-35e5-5703-96c8-37fe127c27f7', 'data_vg': 'ceph-f7da5590-35e5-5703-96c8-37fe127c27f7'}, 'ansible_loop_var': 'item'})
2026-02-14 05:40:39.982199 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:40:39.982219 | orchestrator |
2026-02-14 05:40:39.982230 | orchestrator | TASK [ceph-validate : Fail if root_device is passed in lvm_volumes or devices] ***
2026-02-14 05:40:39.982241 | orchestrator | Saturday 14 February 2026 05:40:35 +0000 (0:00:01.739) 0:03:48.159 *****
2026-02-14 05:40:39.982252 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:40:39.982262 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:40:39.982273 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:40:39.982284 | orchestrator |
2026-02-14 05:40:39.982294 | orchestrator | TASK [ceph-validate : Get devices information] *********************************
2026-02-14 05:40:39.982305 | orchestrator | Saturday 14 February 2026 05:40:37 +0000 (0:00:01.416) 0:03:49.575 *****
2026-02-14 05:40:39.982316 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:40:39.982326 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:40:39.982337 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:40:39.982347 | orchestrator |
2026-02-14 05:40:39.982358 | orchestrator | TASK [ceph-validate : Fail if one of the devices is not a device] **************
2026-02-14 05:40:39.982369 | orchestrator | Saturday 14 February 2026 05:40:38 +0000 (0:00:01.364) 0:03:50.940 *****
2026-02-14 05:40:39.982380 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:40:39.982399 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:40:45.893736 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:40:45.893865 | orchestrator |
2026-02-14 05:40:45.893889 | orchestrator | TASK [ceph-validate : Fail when gpt header found on osd devices] ***************
2026-02-14 05:40:45.893903 | orchestrator | Saturday 14 February 2026 05:40:39 +0000 (0:00:01.354) 0:03:52.294 *****
2026-02-14 05:40:45.893913 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:40:45.893923 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:40:45.893932 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:40:45.893942 | orchestrator |
2026-02-14 05:40:45.893952 | orchestrator | TASK [ceph-validate : Check data logical volume] *******************************
2026-02-14 05:40:45.893962 | orchestrator | Saturday 14 February 2026 05:40:41 +0000 (0:00:01.487) 0:03:53.781 *****
2026-02-14 05:40:45.893972 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-d74a1ea4-c27e-5375-be56-9d9a8e069fa6', 'data_vg': 'ceph-d74a1ea4-c27e-5375-be56-9d9a8e069fa6'})
2026-02-14 05:40:45.893983 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-7b577363-2bac-543e-944e-5354861b1af5', 'data_vg': 'ceph-7b577363-2bac-543e-944e-5354861b1af5'})
2026-02-14 05:40:45.893993 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-1745485d-ab31-507e-930d-8d3ce82a0691', 'data_vg': 'ceph-1745485d-ab31-507e-930d-8d3ce82a0691'})
2026-02-14 05:40:45.894003 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-86d1df08-738c-52e0-accb-8c0a21213af6', 'data_vg': 'ceph-86d1df08-738c-52e0-accb-8c0a21213af6'})
2026-02-14 05:40:45.894012 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-f7da5590-35e5-5703-96c8-37fe127c27f7', 'data_vg': 'ceph-f7da5590-35e5-5703-96c8-37fe127c27f7'})
2026-02-14 05:40:45.894083 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-df737486-1b51-5b4a-92b8-76d7a8957091', 'data_vg': 'ceph-df737486-1b51-5b4a-92b8-76d7a8957091'})
2026-02-14 05:40:45.894093 | orchestrator |
2026-02-14 05:40:45.894103 | orchestrator | TASK [ceph-validate : Fail if one of the data logical volume is not a device or doesn't exist] ***
2026-02-14 05:40:45.894114 | orchestrator | Saturday 14 February 2026 05:40:44 +0000 (0:00:03.019) 0:03:56.801 *****
2026-02-14 05:40:45.894146 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-d74a1ea4-c27e-5375-be56-9d9a8e069fa6/osd-block-d74a1ea4-c27e-5375-be56-9d9a8e069fa6', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 949, 'dev': 6, 'nlink': 1, 'atime': 1771040013.354532, 'mtime': 1771040013.349532, 'ctime': 1771040013.349532, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-d74a1ea4-c27e-5375-be56-9d9a8e069fa6/osd-block-d74a1ea4-c27e-5375-be56-9d9a8e069fa6', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-d74a1ea4-c27e-5375-be56-9d9a8e069fa6', 'data_vg': 'ceph-d74a1ea4-c27e-5375-be56-9d9a8e069fa6'}, 'ansible_loop_var': 'item'})
2026-02-14 05:40:45.894206 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-86d1df08-738c-52e0-accb-8c0a21213af6/osd-block-86d1df08-738c-52e0-accb-8c0a21213af6', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 959, 'dev': 6, 'nlink': 1, 'atime': 1771040034.9598572, 'mtime': 1771040034.9578571, 'ctime': 1771040034.9578571, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-86d1df08-738c-52e0-accb-8c0a21213af6/osd-block-86d1df08-738c-52e0-accb-8c0a21213af6', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-86d1df08-738c-52e0-accb-8c0a21213af6', 'data_vg': 'ceph-86d1df08-738c-52e0-accb-8c0a21213af6'}, 'ansible_loop_var': 'item'})
2026-02-14 05:40:45.894220 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:40:45.894233 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-7b577363-2bac-543e-944e-5354861b1af5/osd-block-7b577363-2bac-543e-944e-5354861b1af5', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 950, 'dev': 6, 'nlink': 1, 'atime': 1771040008.9791288, 'mtime': 1771040008.9711287, 'ctime': 1771040008.9711287, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-7b577363-2bac-543e-944e-5354861b1af5/osd-block-7b577363-2bac-543e-944e-5354861b1af5', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-7b577363-2bac-543e-944e-5354861b1af5', 'data_vg': 'ceph-7b577363-2bac-543e-944e-5354861b1af5'}, 'ansible_loop_var': 'item'})
2026-02-14 05:40:45.894251 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-df737486-1b51-5b4a-92b8-76d7a8957091/osd-block-df737486-1b51-5b4a-92b8-76d7a8957091', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 960, 'dev': 6, 'nlink': 1, 'atime': 1771040029.272435, 'mtime': 1771040029.266435, 'ctime': 1771040029.266435, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-df737486-1b51-5b4a-92b8-76d7a8957091/osd-block-df737486-1b51-5b4a-92b8-76d7a8957091', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-df737486-1b51-5b4a-92b8-76d7a8957091', 'data_vg': 'ceph-df737486-1b51-5b4a-92b8-76d7a8957091'}, 'ansible_loop_var': 'item'})
2026-02-14 05:40:45.894271 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:40:45.894291 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-1745485d-ab31-507e-930d-8d3ce82a0691/osd-block-1745485d-ab31-507e-930d-8d3ce82a0691', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 956, 'dev': 6, 'nlink': 1, 'atime': 1771040011.8291407, 'mtime': 1771040011.8241405, 'ctime': 1771040011.8241405, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-1745485d-ab31-507e-930d-8d3ce82a0691/osd-block-1745485d-ab31-507e-930d-8d3ce82a0691', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-1745485d-ab31-507e-930d-8d3ce82a0691', 'data_vg': 'ceph-1745485d-ab31-507e-930d-8d3ce82a0691'}, 'ansible_loop_var': 'item'})
2026-02-14 05:40:52.227022 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-f7da5590-35e5-5703-96c8-37fe127c27f7/osd-block-f7da5590-35e5-5703-96c8-37fe127c27f7', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 966, 'dev': 6, 'nlink': 1, 'atime': 1771040033.6984653, 'mtime': 1771040033.6944654, 'ctime': 1771040033.6944654, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-f7da5590-35e5-5703-96c8-37fe127c27f7/osd-block-f7da5590-35e5-5703-96c8-37fe127c27f7', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-f7da5590-35e5-5703-96c8-37fe127c27f7', 'data_vg': 'ceph-f7da5590-35e5-5703-96c8-37fe127c27f7'}, 'ansible_loop_var': 'item'})
2026-02-14 05:40:52.227151 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:40:52.227170 | orchestrator |
2026-02-14 05:40:52.227183 | orchestrator | TASK [ceph-validate : Check bluestore db logical volume] ***********************
2026-02-14 05:40:52.227196 | orchestrator | Saturday 14 February 2026 05:40:45 +0000 (0:00:01.410) 0:03:58.212 *****
2026-02-14 05:40:52.227208 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d74a1ea4-c27e-5375-be56-9d9a8e069fa6', 'data_vg': 'ceph-d74a1ea4-c27e-5375-be56-9d9a8e069fa6'})
2026-02-14 05:40:52.227221 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-86d1df08-738c-52e0-accb-8c0a21213af6', 'data_vg': 'ceph-86d1df08-738c-52e0-accb-8c0a21213af6'})
2026-02-14 05:40:52.227260 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:40:52.227271 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b577363-2bac-543e-944e-5354861b1af5', 'data_vg': 'ceph-7b577363-2bac-543e-944e-5354861b1af5'})
2026-02-14 05:40:52.227282 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df737486-1b51-5b4a-92b8-76d7a8957091', 'data_vg': 'ceph-df737486-1b51-5b4a-92b8-76d7a8957091'})
2026-02-14 05:40:52.227293 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:40:52.227304 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1745485d-ab31-507e-930d-8d3ce82a0691', 'data_vg': 'ceph-1745485d-ab31-507e-930d-8d3ce82a0691'})
2026-02-14 05:40:52.227331 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7da5590-35e5-5703-96c8-37fe127c27f7', 'data_vg': 'ceph-f7da5590-35e5-5703-96c8-37fe127c27f7'})
2026-02-14 05:40:52.227342 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:40:52.227353 | orchestrator |
2026-02-14 05:40:52.227364 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore db logical volume is not a device or doesn't exist] ***
2026-02-14 05:40:52.227376 | orchestrator | Saturday 14 February 2026 05:40:47 +0000 (0:00:01.461) 0:03:59.674 *****
2026-02-14 05:40:52.227388 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-d74a1ea4-c27e-5375-be56-9d9a8e069fa6', 'data_vg': 'ceph-d74a1ea4-c27e-5375-be56-9d9a8e069fa6'}, 'ansible_loop_var': 'item'})
2026-02-14 05:40:52.227401 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-86d1df08-738c-52e0-accb-8c0a21213af6', 'data_vg': 'ceph-86d1df08-738c-52e0-accb-8c0a21213af6'}, 'ansible_loop_var': 'item'})
2026-02-14 05:40:52.227412 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:40:52.227423 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-7b577363-2bac-543e-944e-5354861b1af5', 'data_vg': 'ceph-7b577363-2bac-543e-944e-5354861b1af5'}, 'ansible_loop_var': 'item'})
2026-02-14 05:40:52.227453 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-df737486-1b51-5b4a-92b8-76d7a8957091', 'data_vg': 'ceph-df737486-1b51-5b4a-92b8-76d7a8957091'}, 'ansible_loop_var': 'item'})
2026-02-14 05:40:52.227465 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:40:52.227476 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-1745485d-ab31-507e-930d-8d3ce82a0691', 'data_vg': 'ceph-1745485d-ab31-507e-930d-8d3ce82a0691'}, 'ansible_loop_var': 'item'})
2026-02-14 05:40:52.227487 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-f7da5590-35e5-5703-96c8-37fe127c27f7', 'data_vg': 'ceph-f7da5590-35e5-5703-96c8-37fe127c27f7'}, 'ansible_loop_var': 'item'})
2026-02-14 05:40:52.227497 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:40:52.227508 | orchestrator |
2026-02-14 05:40:52.227519 | orchestrator | TASK [ceph-validate : Check bluestore wal logical volume] **********************
2026-02-14 05:40:52.227530 | orchestrator | Saturday 14 February 2026 05:40:48 +0000 (0:00:01.417) 0:04:01.091 *****
2026-02-14 05:40:52.227542 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d74a1ea4-c27e-5375-be56-9d9a8e069fa6', 'data_vg': 'ceph-d74a1ea4-c27e-5375-be56-9d9a8e069fa6'})
2026-02-14 05:40:52.227565 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-86d1df08-738c-52e0-accb-8c0a21213af6', 'data_vg': 'ceph-86d1df08-738c-52e0-accb-8c0a21213af6'})
2026-02-14 05:40:52.227578 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:40:52.227591 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b577363-2bac-543e-944e-5354861b1af5', 'data_vg': 'ceph-7b577363-2bac-543e-944e-5354861b1af5'})
2026-02-14 05:40:52.227604 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df737486-1b51-5b4a-92b8-76d7a8957091', 'data_vg': 'ceph-df737486-1b51-5b4a-92b8-76d7a8957091'})
2026-02-14 05:40:52.227640 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:40:52.227653 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1745485d-ab31-507e-930d-8d3ce82a0691', 'data_vg': 'ceph-1745485d-ab31-507e-930d-8d3ce82a0691'})
2026-02-14 05:40:52.227666 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f7da5590-35e5-5703-96c8-37fe127c27f7', 'data_vg': 'ceph-f7da5590-35e5-5703-96c8-37fe127c27f7'})
2026-02-14 05:40:52.227679 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:40:52.227691 | orchestrator |
2026-02-14 05:40:52.227703 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore wal logical volume is not a device or doesn't exist] ***
2026-02-14 05:40:52.227716 | orchestrator | Saturday 14 February 2026 05:40:50 +0000 (0:00:01.826) 0:04:02.918 *****
2026-02-14 05:40:52.227734 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-d74a1ea4-c27e-5375-be56-9d9a8e069fa6', 'data_vg': 'ceph-d74a1ea4-c27e-5375-be56-9d9a8e069fa6'}, 'ansible_loop_var': 'item'})
2026-02-14 05:40:52.227748 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-86d1df08-738c-52e0-accb-8c0a21213af6', 'data_vg': 'ceph-86d1df08-738c-52e0-accb-8c0a21213af6'}, 'ansible_loop_var': 'item'})
2026-02-14 05:40:52.227760 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:40:52.227773 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-7b577363-2bac-543e-944e-5354861b1af5', 'data_vg': 'ceph-7b577363-2bac-543e-944e-5354861b1af5'}, 'ansible_loop_var': 'item'})
2026-02-14 05:40:52.227785 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-df737486-1b51-5b4a-92b8-76d7a8957091', 'data_vg': 'ceph-df737486-1b51-5b4a-92b8-76d7a8957091'}, 'ansible_loop_var': 'item'})
2026-02-14 05:40:52.227798 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:40:52.227811 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-1745485d-ab31-507e-930d-8d3ce82a0691', 'data_vg': 'ceph-1745485d-ab31-507e-930d-8d3ce82a0691'}, 'ansible_loop_var': 'item'})
2026-02-14 05:40:52.227830 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-f7da5590-35e5-5703-96c8-37fe127c27f7', 'data_vg': 'ceph-f7da5590-35e5-5703-96c8-37fe127c27f7'}, 'ansible_loop_var': 'item'})
2026-02-14 05:41:01.989428 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:41:01.989551 | orchestrator |
2026-02-14 05:41:01.989572 | orchestrator | TASK [ceph-validate : Include check_eth_rgw.yml] *******************************
2026-02-14 05:41:01.989586 | orchestrator | Saturday 14 February 2026 05:40:52 +0000 (0:00:01.623) 0:04:04.541 *****
2026-02-14 05:41:01.989651 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:41:01.989695 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:41:01.989706 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:41:01.989717 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:41:01.989728 | orchestrator | skipping: [testbed-node-4]
2026-02-14 05:41:01.989738 | orchestrator | skipping: [testbed-node-5]
2026-02-14 05:41:01.989749 | orchestrator | skipping: [testbed-manager]
2026-02-14 05:41:01.989760 | orchestrator |
2026-02-14 05:41:01.989783 | orchestrator | TASK [ceph-validate : Include check_rgw_pools.yml] *****************************
2026-02-14 05:41:01.989794 | orchestrator | Saturday 14 February 2026 05:40:54 +0000 (0:00:02.014) 0:04:06.556 *****
2026-02-14 05:41:01.989805 | orchestrator | skipping: [testbed-node-0]
2026-02-14 05:41:01.989816 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:41:01.989827 | orchestrator | skipping: [testbed-node-2]
2026-02-14 05:41:01.989837 | orchestrator | skipping: [testbed-manager]
2026-02-14 05:41:01.989848 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_rgw_pools.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-14 05:41:01.989860 | orchestrator |
2026-02-14 05:41:01.989870 | orchestrator | TASK [ceph-validate : Fail if ec_profile is not set for ec pools] **************
2026-02-14 05:41:01.989881 | orchestrator | Saturday 14 February 2026 05:40:56 +0000 (0:00:02.644) 0:04:09.200 *****
2026-02-14 05:41:01.989893 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 05:41:01.989905 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 05:41:01.989916 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 05:41:01.989926 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 05:41:01.989937 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 05:41:01.989947 | orchestrator | skipping: [testbed-node-3]
2026-02-14 05:41:01.989960 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 05:41:01.989972 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 05:41:01.989985 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 05:41:01.989998 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type':
'replicated'}})  2026-02-14 05:41:01.990077 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 05:41:01.990094 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:41:01.990107 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 05:41:01.990120 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 05:41:01.990133 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 05:41:01.990145 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 05:41:01.990157 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 05:41:01.990170 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:41:01.990183 | orchestrator | 2026-02-14 05:41:01.990196 | orchestrator | TASK [ceph-validate : Fail if ec_k is not set for ec pools] ******************** 2026-02-14 05:41:01.990220 | orchestrator | Saturday 14 February 2026 05:40:58 +0000 (0:00:01.415) 0:04:10.616 ***** 2026-02-14 05:41:01.990232 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 05:41:01.990245 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 05:41:01.990258 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 
05:41:01.990270 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 05:41:01.990303 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 05:41:01.990317 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:41:01.990328 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 05:41:01.990339 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 05:41:01.990349 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 05:41:01.990360 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 05:41:01.990371 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 05:41:01.990381 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:41:01.990392 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 05:41:01.990403 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 05:41:01.990414 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 05:41:01.990424 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-02-14 05:41:01.990435 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 05:41:01.990446 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:41:01.990456 | orchestrator | 2026-02-14 05:41:01.990467 | orchestrator | TASK [ceph-validate : Fail if ec_m is not set for ec pools] ******************** 2026-02-14 05:41:01.990478 | orchestrator | Saturday 14 February 2026 05:40:59 +0000 (0:00:01.701) 0:04:12.317 ***** 2026-02-14 05:41:01.990490 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 05:41:01.990501 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 05:41:01.990512 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 05:41:01.990522 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 05:41:01.990537 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 05:41:01.990564 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:41:01.990583 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 05:41:01.990636 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 05:41:01.990658 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 
05:41:01.990679 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 05:41:01.990699 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 05:41:01.990711 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:41:01.990722 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 05:41:01.990733 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 05:41:01.990743 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 05:41:01.990754 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 05:41:01.990765 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 05:41:01.990776 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:41:01.990786 | orchestrator | 2026-02-14 05:41:01.990797 | orchestrator | TASK [ceph-validate : Include check_nfs.yml] *********************************** 2026-02-14 05:41:01.990808 | orchestrator | Saturday 14 February 2026 05:41:01 +0000 (0:00:01.512) 0:04:13.830 ***** 2026-02-14 05:41:01.990819 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:41:01.990833 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:41:01.990862 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:41:19.267208 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:41:19.267304 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:41:19.267317 | 
orchestrator | skipping: [testbed-node-5] 2026-02-14 05:41:19.267326 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:41:19.267335 | orchestrator | 2026-02-14 05:41:19.267344 | orchestrator | TASK [ceph-validate : Include check_rbdmirror.yml] ***************************** 2026-02-14 05:41:19.267355 | orchestrator | Saturday 14 February 2026 05:41:03 +0000 (0:00:02.065) 0:04:15.895 ***** 2026-02-14 05:41:19.267363 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:41:19.267372 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:41:19.267381 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:41:19.267499 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:41:19.267517 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:41:19.267529 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:41:19.267544 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:41:19.267559 | orchestrator | 2026-02-14 05:41:19.267627 | orchestrator | TASK [ceph-validate : Fail if monitoring group doesn't exist] ****************** 2026-02-14 05:41:19.267646 | orchestrator | Saturday 14 February 2026 05:41:05 +0000 (0:00:02.202) 0:04:18.098 ***** 2026-02-14 05:41:19.267660 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:41:19.267675 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:41:19.267686 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:41:19.267694 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:41:19.267703 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:41:19.267711 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:41:19.267720 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:41:19.267729 | orchestrator | 2026-02-14 05:41:19.267737 | orchestrator | TASK [ceph-validate : Fail when monitoring doesn't contain at least one node.] 
*** 2026-02-14 05:41:19.267747 | orchestrator | Saturday 14 February 2026 05:41:08 +0000 (0:00:02.362) 0:04:20.461 ***** 2026-02-14 05:41:19.267778 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:41:19.267789 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:41:19.267799 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:41:19.267809 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:41:19.267819 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:41:19.267832 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:41:19.267847 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:41:19.267860 | orchestrator | 2026-02-14 05:41:19.267876 | orchestrator | TASK [ceph-validate : Fail when dashboard_admin_password and/or grafana_admin_password are not set] *** 2026-02-14 05:41:19.267891 | orchestrator | Saturday 14 February 2026 05:41:10 +0000 (0:00:02.004) 0:04:22.466 ***** 2026-02-14 05:41:19.267904 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:41:19.267914 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:41:19.267924 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:41:19.267933 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:41:19.267943 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:41:19.267952 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:41:19.267961 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:41:19.267971 | orchestrator | 2026-02-14 05:41:19.267982 | orchestrator | TASK [ceph-validate : Validate container registry credentials] ***************** 2026-02-14 05:41:19.267991 | orchestrator | Saturday 14 February 2026 05:41:12 +0000 (0:00:02.226) 0:04:24.692 ***** 2026-02-14 05:41:19.268001 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:41:19.268011 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:41:19.268020 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:41:19.268030 | orchestrator | skipping: [testbed-node-3] 
2026-02-14 05:41:19.268039 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:41:19.268049 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:41:19.268059 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:41:19.268069 | orchestrator | 2026-02-14 05:41:19.268093 | orchestrator | TASK [ceph-validate : Validate container service and container package] ******** 2026-02-14 05:41:19.268104 | orchestrator | Saturday 14 February 2026 05:41:14 +0000 (0:00:01.920) 0:04:26.612 ***** 2026-02-14 05:41:19.268114 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:41:19.268123 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:41:19.268132 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:41:19.268141 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:41:19.268150 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:41:19.268158 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:41:19.268166 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:41:19.268175 | orchestrator | 2026-02-14 05:41:19.268183 | orchestrator | TASK [ceph-validate : Validate openstack_keys key format] ********************** 2026-02-14 05:41:19.268192 | orchestrator | Saturday 14 February 2026 05:41:16 +0000 (0:00:02.414) 0:04:29.027 ***** 2026-02-14 05:41:19.268202 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-14 05:41:19.268212 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-14 05:41:19.268223 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-14 05:41:19.268233 | orchestrator | skipping: 
[testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-14 05:41:19.268242 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-14 05:41:19.268253 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-14 05:41:19.268270 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:41:19.268299 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-14 05:41:19.268308 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-14 05:41:19.268317 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-14 05:41:19.268326 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-14 05:41:19.268334 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-14 05:41:19.268343 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  
2026-02-14 05:41:19.268352 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:41:19.268360 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-14 05:41:19.268369 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-14 05:41:19.268377 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-14 05:41:19.268386 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-14 05:41:19.268394 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-14 05:41:19.268403 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-14 05:41:19.268412 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:41:19.268425 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-14 05:41:19.268433 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-14 05:41:19.268442 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile 
rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-14 05:41:19.268451 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-14 05:41:19.268459 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-14 05:41:19.268468 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-14 05:41:19.268482 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-14 05:41:19.268491 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-14 05:41:19.268500 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-14 05:41:19.268508 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:41:19.268522 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-14 05:41:22.614451 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-14 05:41:22.614557 | orchestrator | skipping: 
[testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-14 05:41:22.614629 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-14 05:41:22.614642 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-14 05:41:22.614653 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-14 05:41:22.614664 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-14 05:41:22.614674 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-14 05:41:22.614684 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-14 05:41:22.614695 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-14 05:41:22.614706 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:41:22.614717 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 
'client.nova'})  2026-02-14 05:41:22.614727 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-14 05:41:22.614736 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-14 05:41:22.614764 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:41:22.614775 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-14 05:41:22.614785 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-14 05:41:22.614813 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:41:22.614824 | orchestrator | 2026-02-14 05:41:22.614834 | orchestrator | TASK [ceph-validate : Validate clients keys key format] ************************ 2026-02-14 05:41:22.614846 | orchestrator | Saturday 14 February 2026 05:41:19 +0000 (0:00:02.554) 0:04:31.582 ***** 2026-02-14 05:41:22.614855 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:41:22.614864 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:41:22.614874 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:41:22.614883 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:41:22.614892 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:41:22.614902 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:41:22.614911 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:41:22.614920 | orchestrator | 2026-02-14 05:41:22.614930 | orchestrator | TASK [ceph-validate : Validate openstack_keys caps] **************************** 2026-02-14 
05:41:22.614940 | orchestrator | Saturday 14 February 2026 05:41:21 +0000 (0:00:02.419) 0:04:34.001 ***** 2026-02-14 05:41:22.614950 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-14 05:41:22.614959 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-14 05:41:22.614969 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-14 05:41:22.614979 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-14 05:41:22.615009 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-14 05:41:22.615021 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-14 05:41:22.615032 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:41:22.615044 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-14 05:41:22.615054 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-14 05:41:22.615065 | orchestrator | skipping: [testbed-node-1] => 
(item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-14 05:41:22.615076 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-14 05:41:22.615087 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-14 05:41:22.615099 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-14 05:41:22.615110 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:41:22.615121 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-14 05:41:22.615132 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-14 05:41:22.615150 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-14 05:41:22.615161 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-14 05:41:22.615177 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-14 
05:41:22.615189 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-14 05:41:22.615200 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:41:22.615211 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-14 05:41:22.615222 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-14 05:41:22.615232 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-14 05:41:22.615243 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-14 05:41:22.615254 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-14 05:41:22.615265 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-14 05:41:22.615277 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-14 05:41:22.615293 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-14 05:41:52.019723 | orchestrator | 
skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-14 05:41:52.019834 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-14 05:41:52.019851 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-14 05:41:52.019864 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-14 05:41:52.019875 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-14 05:41:52.019887 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:41:52.019899 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:41:52.019911 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-14 05:41:52.019949 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-14 05:41:52.019962 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-14 05:41:52.019973 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 
'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-14 05:41:52.019984 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-14 05:41:52.019996 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-14 05:41:52.020022 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-14 05:41:52.020034 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-14 05:41:52.020045 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:41:52.020056 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-14 05:41:52.020067 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-14 05:41:52.020078 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-14 05:41:52.020089 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:41:52.020100 | orchestrator | 2026-02-14 05:41:52.020112 | orchestrator | TASK [ceph-validate : Validate clients keys caps] ****************************** 2026-02-14 
05:41:52.020124 | orchestrator | Saturday 14 February 2026 05:41:24 +0000 (0:00:02.449) 0:04:36.451 ***** 2026-02-14 05:41:52.020135 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:41:52.020146 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:41:52.020157 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:41:52.020167 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:41:52.020178 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:41:52.020188 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:41:52.020199 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:41:52.020210 | orchestrator | 2026-02-14 05:41:52.020221 | orchestrator | TASK [ceph-validate : Check virtual_ips is defined] **************************** 2026-02-14 05:41:52.020232 | orchestrator | Saturday 14 February 2026 05:41:26 +0000 (0:00:02.391) 0:04:38.843 ***** 2026-02-14 05:41:52.020243 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:41:52.020254 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:41:52.020267 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:41:52.020279 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:41:52.020292 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:41:52.020305 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:41:52.020318 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:41:52.020331 | orchestrator | 2026-02-14 05:41:52.020361 | orchestrator | TASK [ceph-validate : Validate virtual_ips length] ***************************** 2026-02-14 05:41:52.020375 | orchestrator | Saturday 14 February 2026 05:41:28 +0000 (0:00:02.091) 0:04:40.934 ***** 2026-02-14 05:41:52.020395 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:41:52.020414 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:41:52.020433 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:41:52.020453 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:41:52.020467 | orchestrator 
| skipping: [testbed-node-4] 2026-02-14 05:41:52.020480 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:41:52.020493 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:41:52.020505 | orchestrator | 2026-02-14 05:41:52.020518 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ******** 2026-02-14 05:41:52.020530 | orchestrator | Saturday 14 February 2026 05:41:31 +0000 (0:00:02.438) 0:04:43.373 ***** 2026-02-14 05:41:52.020570 | orchestrator | included: /ansible/roles/ceph-container-engine/tasks/pre_requisites/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-02-14 05:41:52.020584 | orchestrator | 2026-02-14 05:41:52.020597 | orchestrator | TASK [ceph-container-engine : Include specific variables] ********************** 2026-02-14 05:41:52.020610 | orchestrator | Saturday 14 February 2026 05:41:33 +0000 (0:00:02.858) 0:04:46.232 ***** 2026-02-14 05:41:52.020623 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-14 05:41:52.020636 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-14 05:41:52.020647 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-14 05:41:52.020658 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-14 05:41:52.020668 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-14 05:41:52.020679 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-14 05:41:52.020689 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-14 05:41:52.020700 | orchestrator | 2026-02-14 05:41:52.020711 | orchestrator | TASK 
[ceph-container-engine : Create the systemd docker override directory] **** 2026-02-14 05:41:52.020722 | orchestrator | Saturday 14 February 2026 05:41:35 +0000 (0:00:02.027) 0:04:48.260 ***** 2026-02-14 05:41:52.020733 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:41:52.020744 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:41:52.020754 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:41:52.020766 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:41:52.020777 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:41:52.020788 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:41:52.020799 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:41:52.020810 | orchestrator | 2026-02-14 05:41:52.020821 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override file] ********* 2026-02-14 05:41:52.020831 | orchestrator | Saturday 14 February 2026 05:41:38 +0000 (0:00:02.170) 0:04:50.430 ***** 2026-02-14 05:41:52.020842 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:41:52.020859 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:41:52.020870 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:41:52.020881 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:41:52.020892 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:41:52.020902 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:41:52.020913 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:41:52.020924 | orchestrator | 2026-02-14 05:41:52.020935 | orchestrator | TASK [ceph-container-engine : Remove docker proxy configuration] *************** 2026-02-14 05:41:52.020946 | orchestrator | Saturday 14 February 2026 05:41:40 +0000 (0:00:01.911) 0:04:52.341 ***** 2026-02-14 05:41:52.020957 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:41:52.020969 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:41:52.020980 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:41:52.020990 | orchestrator | 
ok: [testbed-node-3] 2026-02-14 05:41:52.021001 | orchestrator | ok: [testbed-node-4] 2026-02-14 05:41:52.021019 | orchestrator | ok: [testbed-node-5] 2026-02-14 05:41:52.021030 | orchestrator | ok: [testbed-manager] 2026-02-14 05:41:52.021041 | orchestrator | 2026-02-14 05:41:52.021052 | orchestrator | TASK [ceph-container-engine : Restart docker] ********************************** 2026-02-14 05:41:52.021062 | orchestrator | Saturday 14 February 2026 05:41:42 +0000 (0:00:02.609) 0:04:54.951 ***** 2026-02-14 05:41:52.021073 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:41:52.021084 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:41:52.021094 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:41:52.021110 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:41:52.021121 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:41:52.021131 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:41:52.021142 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:41:52.021153 | orchestrator | 2026-02-14 05:41:52.021164 | orchestrator | TASK [ceph-container-common : Container registry authentication] *************** 2026-02-14 05:41:52.021174 | orchestrator | Saturday 14 February 2026 05:41:45 +0000 (0:00:02.400) 0:04:57.352 ***** 2026-02-14 05:41:52.021185 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:41:52.021196 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:41:52.021206 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:41:52.021217 | orchestrator | skipping: [testbed-node-3] 2026-02-14 05:41:52.021228 | orchestrator | skipping: [testbed-node-4] 2026-02-14 05:41:52.021238 | orchestrator | skipping: [testbed-node-5] 2026-02-14 05:41:52.021249 | orchestrator | skipping: [testbed-manager] 2026-02-14 05:41:52.021260 | orchestrator | 2026-02-14 05:41:52.021271 | orchestrator | TASK [Get the ceph release being deployed] ************************************* 2026-02-14 05:41:52.021282 | orchestrator | 
Saturday 14 February 2026 05:41:47 +0000 (0:00:02.397) 0:04:59.749 ***** 2026-02-14 05:41:52.021292 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:41:52.021303 | orchestrator | 2026-02-14 05:41:52.021314 | orchestrator | TASK [Check ceph release being deployed] *************************************** 2026-02-14 05:41:52.021325 | orchestrator | Saturday 14 February 2026 05:41:50 +0000 (0:00:02.622) 0:05:02.371 ***** 2026-02-14 05:41:52.021343 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:42:32.858359 | orchestrator | 2026-02-14 05:42:32.858553 | orchestrator | PLAY [Ensure cluster config is applied] **************************************** 2026-02-14 05:42:32.858575 | orchestrator | 2026-02-14 05:42:32.858587 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-14 05:42:32.858599 | orchestrator | Saturday 14 February 2026 05:41:52 +0000 (0:00:01.965) 0:05:04.337 ***** 2026-02-14 05:42:32.858611 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:42:32.858623 | orchestrator | 2026-02-14 05:42:32.858634 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-14 05:42:32.858645 | orchestrator | Saturday 14 February 2026 05:41:53 +0000 (0:00:01.564) 0:05:05.901 ***** 2026-02-14 05:42:32.858656 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:42:32.858667 | orchestrator | 2026-02-14 05:42:32.858677 | orchestrator | TASK [Set cluster configs] ***************************************************** 2026-02-14 05:42:32.858688 | orchestrator | Saturday 14 February 2026 05:41:54 +0000 (0:00:01.180) 0:05:07.082 ***** 2026-02-14 05:42:32.858706 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': 
'__omit_place_holder__00bc4e30ddc9572289d35718eb4b72edbacd0b8b'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-14 05:42:32.858729 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__00bc4e30ddc9572289d35718eb4b72edbacd0b8b'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-02-14 05:42:32.858749 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__00bc4e30ddc9572289d35718eb4b72edbacd0b8b'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-14 05:42:32.858804 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__00bc4e30ddc9572289d35718eb4b72edbacd0b8b'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-14 05:42:32.858846 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__00bc4e30ddc9572289d35718eb4b72edbacd0b8b'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-14 05:42:32.858871 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 
'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__00bc4e30ddc9572289d35718eb4b72edbacd0b8b'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__00bc4e30ddc9572289d35718eb4b72edbacd0b8b'}])  2026-02-14 05:42:32.858893 | orchestrator | 2026-02-14 05:42:32.858916 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-02-14 05:42:32.858936 | orchestrator | 2026-02-14 05:42:32.858958 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-02-14 05:42:32.858979 | orchestrator | Saturday 14 February 2026 05:42:05 +0000 (0:00:10.262) 0:05:17.345 ***** 2026-02-14 05:42:32.858998 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:42:32.859019 | orchestrator | 2026-02-14 05:42:32.859040 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-02-14 05:42:32.859061 | orchestrator | Saturday 14 February 2026 05:42:06 +0000 (0:00:01.489) 0:05:18.834 ***** 2026-02-14 05:42:32.859082 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:42:32.859104 | orchestrator | 2026-02-14 05:42:32.859117 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-02-14 05:42:32.859131 | orchestrator | Saturday 14 February 2026 05:42:07 +0000 (0:00:01.166) 0:05:20.001 ***** 2026-02-14 05:42:32.859145 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:42:32.859158 | orchestrator | 2026-02-14 05:42:32.859171 | orchestrator | TASK [Select a running monitor] ************************************************ 2026-02-14 05:42:32.859183 | orchestrator | Saturday 14 February 2026 05:42:08 +0000 (0:00:01.189) 0:05:21.191 ***** 2026-02-14 05:42:32.859196 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:42:32.859208 | orchestrator | 2026-02-14 05:42:32.859220 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-14 
05:42:32.859233 | orchestrator | Saturday 14 February 2026 05:42:10 +0000 (0:00:01.162) 0:05:22.353 ***** 2026-02-14 05:42:32.859245 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-02-14 05:42:32.859258 | orchestrator | 2026-02-14 05:42:32.859289 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-14 05:42:32.859301 | orchestrator | Saturday 14 February 2026 05:42:11 +0000 (0:00:01.211) 0:05:23.565 ***** 2026-02-14 05:42:32.859311 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:42:32.859322 | orchestrator | 2026-02-14 05:42:32.859333 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-14 05:42:32.859343 | orchestrator | Saturday 14 February 2026 05:42:12 +0000 (0:00:01.469) 0:05:25.035 ***** 2026-02-14 05:42:32.859354 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:42:32.859365 | orchestrator | 2026-02-14 05:42:32.859376 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-14 05:42:32.859398 | orchestrator | Saturday 14 February 2026 05:42:14 +0000 (0:00:01.311) 0:05:26.347 ***** 2026-02-14 05:42:32.859409 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:42:32.859419 | orchestrator | 2026-02-14 05:42:32.859430 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-14 05:42:32.859441 | orchestrator | Saturday 14 February 2026 05:42:15 +0000 (0:00:01.577) 0:05:27.925 ***** 2026-02-14 05:42:32.859451 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:42:32.859462 | orchestrator | 2026-02-14 05:42:32.859473 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-14 05:42:32.859508 | orchestrator | Saturday 14 February 2026 05:42:16 +0000 (0:00:01.154) 0:05:29.079 ***** 2026-02-14 05:42:32.859519 | orchestrator | ok: [testbed-node-0] 2026-02-14 
05:42:32.859530 | orchestrator | 2026-02-14 05:42:32.859541 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-14 05:42:32.859552 | orchestrator | Saturday 14 February 2026 05:42:17 +0000 (0:00:01.217) 0:05:30.297 ***** 2026-02-14 05:42:32.859562 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:42:32.859573 | orchestrator | 2026-02-14 05:42:32.859584 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-14 05:42:32.859595 | orchestrator | Saturday 14 February 2026 05:42:19 +0000 (0:00:01.243) 0:05:31.540 ***** 2026-02-14 05:42:32.859606 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:42:32.859617 | orchestrator | 2026-02-14 05:42:32.859627 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-14 05:42:32.859638 | orchestrator | Saturday 14 February 2026 05:42:20 +0000 (0:00:01.195) 0:05:32.736 ***** 2026-02-14 05:42:32.859649 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:42:32.859660 | orchestrator | 2026-02-14 05:42:32.859671 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-14 05:42:32.859681 | orchestrator | Saturday 14 February 2026 05:42:21 +0000 (0:00:01.270) 0:05:34.007 ***** 2026-02-14 05:42:32.859692 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-14 05:42:32.859703 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 05:42:32.859714 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 05:42:32.859724 | orchestrator | 2026-02-14 05:42:32.859735 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-14 05:42:32.859746 | orchestrator | Saturday 14 February 2026 05:42:23 +0000 (0:00:01.680) 0:05:35.687 ***** 2026-02-14 05:42:32.859757 | 
orchestrator | ok: [testbed-node-0] 2026-02-14 05:42:32.859768 | orchestrator | 2026-02-14 05:42:32.859779 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-14 05:42:32.859797 | orchestrator | Saturday 14 February 2026 05:42:24 +0000 (0:00:01.308) 0:05:36.996 ***** 2026-02-14 05:42:32.859808 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-14 05:42:32.859819 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 05:42:32.859829 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 05:42:32.859840 | orchestrator | 2026-02-14 05:42:32.859851 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-14 05:42:32.859861 | orchestrator | Saturday 14 February 2026 05:42:27 +0000 (0:00:03.277) 0:05:40.274 ***** 2026-02-14 05:42:32.859872 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-14 05:42:32.859883 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-14 05:42:32.859894 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-14 05:42:32.859905 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:42:32.859916 | orchestrator | 2026-02-14 05:42:32.859926 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-14 05:42:32.859937 | orchestrator | Saturday 14 February 2026 05:42:29 +0000 (0:00:01.484) 0:05:41.758 ***** 2026-02-14 05:42:32.859950 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-14 05:42:32.859970 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-14 05:42:32.859981 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-14 05:42:32.859993 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:42:32.860004 | orchestrator | 2026-02-14 05:42:32.860014 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-14 05:42:32.860025 | orchestrator | Saturday 14 February 2026 05:42:31 +0000 (0:00:02.159) 0:05:43.918 ***** 2026-02-14 05:42:32.860045 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 05:42:54.587039 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 05:42:54.587159 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 05:42:54.587176 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:42:54.587189 | orchestrator | 2026-02-14 05:42:54.587201 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-14 05:42:54.587213 | orchestrator | Saturday 14 February 2026 05:42:32 +0000 (0:00:01.256) 0:05:45.175 ***** 2026-02-14 05:42:54.587227 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '775cd2ba237c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-14 05:42:25.194454', 'end': '2026-02-14 05:42:25.257153', 'delta': '0:00:00.062699', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['775cd2ba237c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-14 05:42:54.587260 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '26dcb1313f5c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-14 05:42:25.767229', 'end': '2026-02-14 05:42:25.815303', 'delta': '0:00:00.048074', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['26dcb1313f5c'], 'stderr_lines': [], 'failed': 
False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-14 05:42:54.587294 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '7aff8e7c54ed', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-14 05:42:26.750773', 'end': '2026-02-14 05:42:26.791948', 'delta': '0:00:00.041175', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['7aff8e7c54ed'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-14 05:42:54.587307 | orchestrator | 2026-02-14 05:42:54.587318 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-14 05:42:54.587329 | orchestrator | Saturday 14 February 2026 05:42:34 +0000 (0:00:01.338) 0:05:46.513 ***** 2026-02-14 05:42:54.587340 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:42:54.587352 | orchestrator | 2026-02-14 05:42:54.587363 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-14 05:42:54.587374 | orchestrator | Saturday 14 February 2026 05:42:36 +0000 (0:00:01.844) 0:05:48.358 ***** 2026-02-14 05:42:54.587385 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:42:54.587396 | orchestrator | 2026-02-14 05:42:54.587407 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-14 05:42:54.587418 | orchestrator | Saturday 14 February 2026 05:42:37 +0000 (0:00:01.303) 0:05:49.661 ***** 2026-02-14 05:42:54.587429 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:42:54.587440 | orchestrator | 2026-02-14 
05:42:54.587451 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-14 05:42:54.587506 | orchestrator | Saturday 14 February 2026 05:42:38 +0000 (0:00:01.231) 0:05:50.893 ***** 2026-02-14 05:42:54.587536 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] 2026-02-14 05:42:54.587548 | orchestrator | 2026-02-14 05:42:54.587561 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-14 05:42:54.587573 | orchestrator | Saturday 14 February 2026 05:42:41 +0000 (0:00:02.605) 0:05:53.499 ***** 2026-02-14 05:42:54.587586 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:42:54.587599 | orchestrator | 2026-02-14 05:42:54.587612 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-14 05:42:54.587625 | orchestrator | Saturday 14 February 2026 05:42:42 +0000 (0:00:01.229) 0:05:54.728 ***** 2026-02-14 05:42:54.587637 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:42:54.587649 | orchestrator | 2026-02-14 05:42:54.587662 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-14 05:42:54.587674 | orchestrator | Saturday 14 February 2026 05:42:43 +0000 (0:00:01.181) 0:05:55.910 ***** 2026-02-14 05:42:54.587687 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:42:54.587700 | orchestrator | 2026-02-14 05:42:54.587712 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-14 05:42:54.587725 | orchestrator | Saturday 14 February 2026 05:42:44 +0000 (0:00:01.251) 0:05:57.162 ***** 2026-02-14 05:42:54.587737 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:42:54.587750 | orchestrator | 2026-02-14 05:42:54.587762 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-14 05:42:54.587775 | orchestrator | Saturday 14 February 2026 05:42:46 
+0000 (0:00:01.175) 0:05:58.337 ***** 2026-02-14 05:42:54.587788 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:42:54.587800 | orchestrator | 2026-02-14 05:42:54.587813 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-14 05:42:54.587825 | orchestrator | Saturday 14 February 2026 05:42:47 +0000 (0:00:01.253) 0:05:59.590 ***** 2026-02-14 05:42:54.587847 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:42:54.587859 | orchestrator | 2026-02-14 05:42:54.587872 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-14 05:42:54.587884 | orchestrator | Saturday 14 February 2026 05:42:48 +0000 (0:00:01.222) 0:06:00.813 ***** 2026-02-14 05:42:54.587896 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:42:54.587910 | orchestrator | 2026-02-14 05:42:54.587922 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-14 05:42:54.587934 | orchestrator | Saturday 14 February 2026 05:42:49 +0000 (0:00:01.214) 0:06:02.028 ***** 2026-02-14 05:42:54.587945 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:42:54.587956 | orchestrator | 2026-02-14 05:42:54.587967 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-14 05:42:54.587979 | orchestrator | Saturday 14 February 2026 05:42:50 +0000 (0:00:01.190) 0:06:03.219 ***** 2026-02-14 05:42:54.587989 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:42:54.588001 | orchestrator | 2026-02-14 05:42:54.588017 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-14 05:42:54.588030 | orchestrator | Saturday 14 February 2026 05:42:52 +0000 (0:00:01.132) 0:06:04.351 ***** 2026-02-14 05:42:54.588041 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:42:54.588052 | orchestrator | 2026-02-14 05:42:54.588063 | orchestrator | TASK 
[ceph-facts : Collect existed devices] ************************************ 2026-02-14 05:42:54.588073 | orchestrator | Saturday 14 February 2026 05:42:53 +0000 (0:00:01.209) 0:06:05.561 ***** 2026-02-14 05:42:54.588085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:42:54.588097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:42:54.588109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:42:54.588121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-02-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-14 05:42:54.588142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:42:55.840137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:42:55.840294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:42:55.840353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7d6eeb05', 'removable': '0', 
'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part16', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part14', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part15', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part1', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-14 05:42:55.840380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:42:55.840393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:42:55.840405 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:42:55.840418 | orchestrator | 2026-02-14 05:42:55.840430 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-14 05:42:55.840451 | orchestrator | Saturday 14 February 2026 05:42:54 +0000 (0:00:01.329) 0:06:06.891 ***** 2026-02-14 05:42:55.840522 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:42:55.840537 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:42:55.840555 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:42:55.840568 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-02-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:42:55.840580 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:42:55.840598 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:42:55.840642 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:43:20.823574 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7d6eeb05', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part16', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part14', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part15', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part1', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:43:20.823699 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:43:20.823718 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:43:20.823755 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:43:20.823771 | orchestrator | 2026-02-14 05:43:20.823799 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-14 05:43:20.823812 | 
orchestrator | Saturday 14 February 2026 05:42:55 +0000 (0:00:01.267) 0:06:08.158 ***** 2026-02-14 05:43:20.823823 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:43:20.823836 | orchestrator | 2026-02-14 05:43:20.823847 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-14 05:43:20.823858 | orchestrator | Saturday 14 February 2026 05:42:57 +0000 (0:00:01.576) 0:06:09.735 ***** 2026-02-14 05:43:20.823869 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:43:20.823880 | orchestrator | 2026-02-14 05:43:20.823890 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-14 05:43:20.823919 | orchestrator | Saturday 14 February 2026 05:42:58 +0000 (0:00:01.105) 0:06:10.841 ***** 2026-02-14 05:43:20.823931 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:43:20.823941 | orchestrator | 2026-02-14 05:43:20.823952 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-14 05:43:20.823963 | orchestrator | Saturday 14 February 2026 05:43:00 +0000 (0:00:01.488) 0:06:12.330 ***** 2026-02-14 05:43:20.823974 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:43:20.823984 | orchestrator | 2026-02-14 05:43:20.823995 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-14 05:43:20.824005 | orchestrator | Saturday 14 February 2026 05:43:01 +0000 (0:00:01.199) 0:06:13.529 ***** 2026-02-14 05:43:20.824016 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:43:20.824027 | orchestrator | 2026-02-14 05:43:20.824037 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-14 05:43:20.824050 | orchestrator | Saturday 14 February 2026 05:43:02 +0000 (0:00:01.326) 0:06:14.856 ***** 2026-02-14 05:43:20.824063 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:43:20.824076 | orchestrator | 2026-02-14 05:43:20.824088 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-14 05:43:20.824100 | orchestrator | Saturday 14 February 2026 05:43:03 +0000 (0:00:01.221) 0:06:16.078 ***** 2026-02-14 05:43:20.824113 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-14 05:43:20.824126 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-14 05:43:20.824139 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-14 05:43:20.824152 | orchestrator | 2026-02-14 05:43:20.824164 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-14 05:43:20.824183 | orchestrator | Saturday 14 February 2026 05:43:05 +0000 (0:00:02.187) 0:06:18.265 ***** 2026-02-14 05:43:20.824196 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-14 05:43:20.824208 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-14 05:43:20.824220 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-14 05:43:20.824233 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:43:20.824245 | orchestrator | 2026-02-14 05:43:20.824258 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-14 05:43:20.824270 | orchestrator | Saturday 14 February 2026 05:43:07 +0000 (0:00:01.177) 0:06:19.443 ***** 2026-02-14 05:43:20.824282 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:43:20.824294 | orchestrator | 2026-02-14 05:43:20.824307 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-14 05:43:20.824319 | orchestrator | Saturday 14 February 2026 05:43:08 +0000 (0:00:01.345) 0:06:20.789 ***** 2026-02-14 05:43:20.824331 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-14 05:43:20.824344 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 
05:43:20.824365 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 05:43:20.824377 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-14 05:43:20.824390 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-14 05:43:20.824402 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-14 05:43:20.824415 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-14 05:43:20.824469 | orchestrator | 2026-02-14 05:43:20.824490 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-14 05:43:20.824508 | orchestrator | Saturday 14 February 2026 05:43:10 +0000 (0:00:02.355) 0:06:23.145 ***** 2026-02-14 05:43:20.824525 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-14 05:43:20.824536 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 05:43:20.824547 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 05:43:20.824557 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-14 05:43:20.824568 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-14 05:43:20.824578 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-14 05:43:20.824589 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-14 05:43:20.824600 | orchestrator | 2026-02-14 05:43:20.824610 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-02-14 05:43:20.824621 | orchestrator | Saturday 14 February 2026 05:43:13 +0000 (0:00:03.058) 0:06:26.204 
***** 2026-02-14 05:43:20.824632 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] 2026-02-14 05:43:20.824643 | orchestrator | 2026-02-14 05:43:20.824653 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-02-14 05:43:20.824664 | orchestrator | Saturday 14 February 2026 05:43:16 +0000 (0:00:02.199) 0:06:28.403 ***** 2026-02-14 05:43:20.824675 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:43:20.824685 | orchestrator | 2026-02-14 05:43:20.824696 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-02-14 05:43:20.824707 | orchestrator | Saturday 14 February 2026 05:43:17 +0000 (0:00:01.214) 0:06:29.618 ***** 2026-02-14 05:43:20.824717 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:43:20.824728 | orchestrator | 2026-02-14 05:43:20.824738 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-02-14 05:43:20.824749 | orchestrator | Saturday 14 February 2026 05:43:18 +0000 (0:00:01.201) 0:06:30.819 ***** 2026-02-14 05:43:20.824760 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] 2026-02-14 05:43:20.824770 | orchestrator | 2026-02-14 05:43:20.824781 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-02-14 05:43:20.824799 | orchestrator | Saturday 14 February 2026 05:43:20 +0000 (0:00:02.315) 0:06:33.135 ***** 2026-02-14 05:44:23.415299 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:44:23.415482 | orchestrator | 2026-02-14 05:44:23.415498 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-02-14 05:44:23.415507 | orchestrator | Saturday 14 February 2026 05:43:21 +0000 (0:00:01.165) 0:06:34.301 ***** 2026-02-14 05:44:23.415518 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-14 05:44:23.415527 | orchestrator | ok: [testbed-node-0 
-> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 05:44:23.415536 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 05:44:23.415543 | orchestrator | 2026-02-14 05:44:23.415552 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-02-14 05:44:23.415559 | orchestrator | Saturday 14 February 2026 05:43:24 +0000 (0:00:02.667) 0:06:36.969 ***** 2026-02-14 05:44:23.415588 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd', 'testbed-node-0']) 2026-02-14 05:44:23.415596 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd', 'testbed-node-1']) 2026-02-14 05:44:23.415606 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd', 'testbed-node-2']) 2026-02-14 05:44:23.415613 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd-mirror', 'testbed-node-0']) 2026-02-14 05:44:23.415621 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd-mirror', 'testbed-node-1']) 2026-02-14 05:44:23.415641 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd-mirror', 'testbed-node-2']) 2026-02-14 05:44:23.415649 | orchestrator | 2026-02-14 05:44:23.415657 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-02-14 05:44:23.415665 | orchestrator | Saturday 14 February 2026 05:43:38 +0000 (0:00:13.574) 0:06:50.544 ***** 2026-02-14 05:44:23.415673 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2026-02-14 05:44:23.415681 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-14 05:44:23.415689 | orchestrator | 2026-02-14 05:44:23.415697 | orchestrator | TASK [Mask the mgr service] **************************************************** 2026-02-14 05:44:23.415705 | orchestrator | Saturday 14 February 2026 
05:43:42 +0000 (0:00:03.872) 0:06:54.417 ***** 2026-02-14 05:44:23.415713 | orchestrator | changed: [testbed-node-0] 2026-02-14 05:44:23.415720 | orchestrator | 2026-02-14 05:44:23.415728 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-14 05:44:23.415736 | orchestrator | Saturday 14 February 2026 05:43:44 +0000 (0:00:02.559) 0:06:56.976 ***** 2026-02-14 05:44:23.415744 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0 2026-02-14 05:44:23.415752 | orchestrator | 2026-02-14 05:44:23.415759 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-14 05:44:23.415767 | orchestrator | Saturday 14 February 2026 05:43:46 +0000 (0:00:01.456) 0:06:58.433 ***** 2026-02-14 05:44:23.415775 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0 2026-02-14 05:44:23.415783 | orchestrator | 2026-02-14 05:44:23.415790 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-14 05:44:23.415796 | orchestrator | Saturday 14 February 2026 05:43:47 +0000 (0:00:01.750) 0:07:00.184 ***** 2026-02-14 05:44:23.415803 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:44:23.415810 | orchestrator | 2026-02-14 05:44:23.415816 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-14 05:44:23.415823 | orchestrator | Saturday 14 February 2026 05:43:49 +0000 (0:00:01.584) 0:07:01.768 ***** 2026-02-14 05:44:23.415829 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:44:23.415837 | orchestrator | 2026-02-14 05:44:23.415845 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-14 05:44:23.415853 | orchestrator | Saturday 14 February 2026 05:43:50 +0000 (0:00:01.141) 0:07:02.910 ***** 2026-02-14 05:44:23.415860 | orchestrator | 
skipping: [testbed-node-0] 2026-02-14 05:44:23.415868 | orchestrator | 2026-02-14 05:44:23.415875 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-14 05:44:23.415883 | orchestrator | Saturday 14 February 2026 05:43:51 +0000 (0:00:01.145) 0:07:04.056 ***** 2026-02-14 05:44:23.415890 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:44:23.415898 | orchestrator | 2026-02-14 05:44:23.415905 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-14 05:44:23.415913 | orchestrator | Saturday 14 February 2026 05:43:52 +0000 (0:00:01.134) 0:07:05.190 ***** 2026-02-14 05:44:23.415920 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:44:23.415928 | orchestrator | 2026-02-14 05:44:23.415935 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-14 05:44:23.415949 | orchestrator | Saturday 14 February 2026 05:43:54 +0000 (0:00:01.521) 0:07:06.712 ***** 2026-02-14 05:44:23.415958 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:44:23.415965 | orchestrator | 2026-02-14 05:44:23.415973 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-14 05:44:23.415981 | orchestrator | Saturday 14 February 2026 05:43:55 +0000 (0:00:01.154) 0:07:07.866 ***** 2026-02-14 05:44:23.415988 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:44:23.415996 | orchestrator | 2026-02-14 05:44:23.416003 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-14 05:44:23.416010 | orchestrator | Saturday 14 February 2026 05:43:56 +0000 (0:00:01.132) 0:07:08.999 ***** 2026-02-14 05:44:23.416018 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:44:23.416026 | orchestrator | 2026-02-14 05:44:23.416034 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-14 05:44:23.416041 | 
orchestrator | Saturday 14 February 2026 05:43:58 +0000 (0:00:01.561) 0:07:10.561 ***** 2026-02-14 05:44:23.416048 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:44:23.416056 | orchestrator | 2026-02-14 05:44:23.416077 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-14 05:44:23.416085 | orchestrator | Saturday 14 February 2026 05:43:59 +0000 (0:00:01.590) 0:07:12.152 ***** 2026-02-14 05:44:23.416093 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:44:23.416100 | orchestrator | 2026-02-14 05:44:23.416107 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-14 05:44:23.416115 | orchestrator | Saturday 14 February 2026 05:44:00 +0000 (0:00:01.158) 0:07:13.310 ***** 2026-02-14 05:44:23.416122 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:44:23.416129 | orchestrator | 2026-02-14 05:44:23.416137 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-14 05:44:23.416144 | orchestrator | Saturday 14 February 2026 05:44:02 +0000 (0:00:01.310) 0:07:14.620 ***** 2026-02-14 05:44:23.416152 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:44:23.416159 | orchestrator | 2026-02-14 05:44:23.416167 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-14 05:44:23.416175 | orchestrator | Saturday 14 February 2026 05:44:03 +0000 (0:00:01.226) 0:07:15.847 ***** 2026-02-14 05:44:23.416182 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:44:23.416190 | orchestrator | 2026-02-14 05:44:23.416197 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-14 05:44:23.416203 | orchestrator | Saturday 14 February 2026 05:44:04 +0000 (0:00:01.164) 0:07:17.012 ***** 2026-02-14 05:44:23.416209 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:44:23.416216 | orchestrator | 2026-02-14 
05:44:23.416222 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-14 05:44:23.416229 | orchestrator | Saturday 14 February 2026 05:44:05 +0000 (0:00:01.209) 0:07:18.222 ***** 2026-02-14 05:44:23.416240 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:44:23.416246 | orchestrator | 2026-02-14 05:44:23.416253 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-14 05:44:23.416259 | orchestrator | Saturday 14 February 2026 05:44:07 +0000 (0:00:01.168) 0:07:19.390 ***** 2026-02-14 05:44:23.416266 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:44:23.416272 | orchestrator | 2026-02-14 05:44:23.416279 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-14 05:44:23.416285 | orchestrator | Saturday 14 February 2026 05:44:08 +0000 (0:00:01.158) 0:07:20.549 ***** 2026-02-14 05:44:23.416292 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:44:23.416298 | orchestrator | 2026-02-14 05:44:23.416305 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-14 05:44:23.416311 | orchestrator | Saturday 14 February 2026 05:44:09 +0000 (0:00:01.164) 0:07:21.713 ***** 2026-02-14 05:44:23.416318 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:44:23.416324 | orchestrator | 2026-02-14 05:44:23.416331 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-14 05:44:23.416342 | orchestrator | Saturday 14 February 2026 05:44:10 +0000 (0:00:01.206) 0:07:22.919 ***** 2026-02-14 05:44:23.416348 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:44:23.416355 | orchestrator | 2026-02-14 05:44:23.416378 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-14 05:44:23.416384 | orchestrator | Saturday 14 February 2026 05:44:11 +0000 (0:00:01.154) 0:07:24.074 ***** 
2026-02-14 05:44:23.416391 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:44:23.416397 | orchestrator | 2026-02-14 05:44:23.416404 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-14 05:44:23.416410 | orchestrator | Saturday 14 February 2026 05:44:12 +0000 (0:00:01.131) 0:07:25.206 ***** 2026-02-14 05:44:23.416417 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:44:23.416423 | orchestrator | 2026-02-14 05:44:23.416430 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-14 05:44:23.416436 | orchestrator | Saturday 14 February 2026 05:44:14 +0000 (0:00:01.129) 0:07:26.336 ***** 2026-02-14 05:44:23.416443 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:44:23.416449 | orchestrator | 2026-02-14 05:44:23.416456 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-14 05:44:23.416462 | orchestrator | Saturday 14 February 2026 05:44:15 +0000 (0:00:01.146) 0:07:27.482 ***** 2026-02-14 05:44:23.416469 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:44:23.416475 | orchestrator | 2026-02-14 05:44:23.416482 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-14 05:44:23.416488 | orchestrator | Saturday 14 February 2026 05:44:16 +0000 (0:00:01.184) 0:07:28.667 ***** 2026-02-14 05:44:23.416495 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:44:23.416501 | orchestrator | 2026-02-14 05:44:23.416508 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-14 05:44:23.416514 | orchestrator | Saturday 14 February 2026 05:44:17 +0000 (0:00:01.162) 0:07:29.829 ***** 2026-02-14 05:44:23.416521 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:44:23.416527 | orchestrator | 2026-02-14 05:44:23.416534 | orchestrator | TASK [ceph-common : Set_fact ceph_version] 
************************************* 2026-02-14 05:44:23.416540 | orchestrator | Saturday 14 February 2026 05:44:18 +0000 (0:00:01.225) 0:07:31.054 ***** 2026-02-14 05:44:23.416547 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:44:23.416554 | orchestrator | 2026-02-14 05:44:23.416560 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-14 05:44:23.416567 | orchestrator | Saturday 14 February 2026 05:44:19 +0000 (0:00:01.147) 0:07:32.202 ***** 2026-02-14 05:44:23.416573 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:44:23.416580 | orchestrator | 2026-02-14 05:44:23.416586 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-14 05:44:23.416593 | orchestrator | Saturday 14 February 2026 05:44:21 +0000 (0:00:01.223) 0:07:33.426 ***** 2026-02-14 05:44:23.416599 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:44:23.416606 | orchestrator | 2026-02-14 05:44:23.416612 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-14 05:44:23.416619 | orchestrator | Saturday 14 February 2026 05:44:22 +0000 (0:00:01.167) 0:07:34.594 ***** 2026-02-14 05:44:23.416626 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:44:23.416632 | orchestrator | 2026-02-14 05:44:23.416639 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-14 05:44:23.416645 | orchestrator | Saturday 14 February 2026 05:44:23 +0000 (0:00:01.136) 0:07:35.730 ***** 2026-02-14 05:45:16.287972 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:45:16.288094 | orchestrator | 2026-02-14 05:45:16.288111 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-14 05:45:16.288124 | orchestrator | Saturday 14 February 2026 05:44:24 +0000 (0:00:01.141) 0:07:36.872 ***** 2026-02-14 05:45:16.288136 | orchestrator | 
skipping: [testbed-node-0] 2026-02-14 05:45:16.288146 | orchestrator | 2026-02-14 05:45:16.288157 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-14 05:45:16.288193 | orchestrator | Saturday 14 February 2026 05:44:25 +0000 (0:00:01.150) 0:07:38.023 ***** 2026-02-14 05:45:16.288205 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:45:16.288216 | orchestrator | 2026-02-14 05:45:16.288227 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-14 05:45:16.288238 | orchestrator | Saturday 14 February 2026 05:44:27 +0000 (0:00:01.993) 0:07:40.017 ***** 2026-02-14 05:45:16.288248 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:45:16.288259 | orchestrator | 2026-02-14 05:45:16.288269 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-14 05:45:16.288280 | orchestrator | Saturday 14 February 2026 05:44:30 +0000 (0:00:02.518) 0:07:42.535 ***** 2026-02-14 05:45:16.288291 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0 2026-02-14 05:45:16.288303 | orchestrator | 2026-02-14 05:45:16.288350 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-14 05:45:16.288362 | orchestrator | Saturday 14 February 2026 05:44:31 +0000 (0:00:01.570) 0:07:44.105 ***** 2026-02-14 05:45:16.288374 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:45:16.288385 | orchestrator | 2026-02-14 05:45:16.288411 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-14 05:45:16.288422 | orchestrator | Saturday 14 February 2026 05:44:33 +0000 (0:00:01.254) 0:07:45.360 ***** 2026-02-14 05:45:16.288433 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:45:16.288443 | orchestrator | 2026-02-14 05:45:16.288454 | orchestrator | TASK [ceph-container-common : Remove ceph udev 
rules] ************************** 2026-02-14 05:45:16.288464 | orchestrator | Saturday 14 February 2026 05:44:34 +0000 (0:00:01.450) 0:07:46.811 ***** 2026-02-14 05:45:16.288475 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-14 05:45:16.288485 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-14 05:45:16.288497 | orchestrator | 2026-02-14 05:45:16.288507 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-14 05:45:16.288518 | orchestrator | Saturday 14 February 2026 05:44:36 +0000 (0:00:02.040) 0:07:48.854 ***** 2026-02-14 05:45:16.288529 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:45:16.288539 | orchestrator | 2026-02-14 05:45:16.288550 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-14 05:45:16.288560 | orchestrator | Saturday 14 February 2026 05:44:38 +0000 (0:00:01.798) 0:07:50.652 ***** 2026-02-14 05:45:16.288571 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:45:16.288582 | orchestrator | 2026-02-14 05:45:16.288592 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-14 05:45:16.288603 | orchestrator | Saturday 14 February 2026 05:44:39 +0000 (0:00:01.180) 0:07:51.832 ***** 2026-02-14 05:45:16.288613 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:45:16.288624 | orchestrator | 2026-02-14 05:45:16.288634 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-14 05:45:16.288645 | orchestrator | Saturday 14 February 2026 05:44:40 +0000 (0:00:01.165) 0:07:52.998 ***** 2026-02-14 05:45:16.288655 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:45:16.288666 | orchestrator | 2026-02-14 05:45:16.288676 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-14 
05:45:16.288687 | orchestrator | Saturday 14 February 2026 05:44:41 +0000 (0:00:01.181) 0:07:54.179 ***** 2026-02-14 05:45:16.288697 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0 2026-02-14 05:45:16.288708 | orchestrator | 2026-02-14 05:45:16.288719 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-14 05:45:16.288729 | orchestrator | Saturday 14 February 2026 05:44:43 +0000 (0:00:01.526) 0:07:55.705 ***** 2026-02-14 05:45:16.288739 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:45:16.288750 | orchestrator | 2026-02-14 05:45:16.288761 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-14 05:45:16.288782 | orchestrator | Saturday 14 February 2026 05:44:45 +0000 (0:00:01.720) 0:07:57.431 ***** 2026-02-14 05:45:16.288793 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-14 05:45:16.288804 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-14 05:45:16.288814 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-14 05:45:16.288825 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:45:16.288835 | orchestrator | 2026-02-14 05:45:16.288846 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-14 05:45:16.288856 | orchestrator | Saturday 14 February 2026 05:44:46 +0000 (0:00:01.158) 0:07:58.590 ***** 2026-02-14 05:45:16.288867 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:45:16.288877 | orchestrator | 2026-02-14 05:45:16.288888 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-14 05:45:16.288898 | orchestrator | Saturday 14 February 2026 05:44:47 +0000 (0:00:01.123) 0:07:59.714 ***** 2026-02-14 05:45:16.288909 | orchestrator | 
skipping: [testbed-node-0] 2026-02-14 05:45:16.288919 | orchestrator | 2026-02-14 05:45:16.288930 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-14 05:45:16.288940 | orchestrator | Saturday 14 February 2026 05:44:48 +0000 (0:00:01.154) 0:08:00.868 ***** 2026-02-14 05:45:16.288951 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:45:16.288962 | orchestrator | 2026-02-14 05:45:16.288973 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-14 05:45:16.289000 | orchestrator | Saturday 14 February 2026 05:44:49 +0000 (0:00:01.149) 0:08:02.018 ***** 2026-02-14 05:45:16.289011 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:45:16.289022 | orchestrator | 2026-02-14 05:45:16.289033 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-14 05:45:16.289044 | orchestrator | Saturday 14 February 2026 05:44:50 +0000 (0:00:01.237) 0:08:03.255 ***** 2026-02-14 05:45:16.289054 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:45:16.289065 | orchestrator | 2026-02-14 05:45:16.289075 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-14 05:45:16.289086 | orchestrator | Saturday 14 February 2026 05:44:52 +0000 (0:00:01.136) 0:08:04.392 ***** 2026-02-14 05:45:16.289096 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:45:16.289107 | orchestrator | 2026-02-14 05:45:16.289140 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-14 05:45:16.289152 | orchestrator | Saturday 14 February 2026 05:44:54 +0000 (0:00:02.491) 0:08:06.884 ***** 2026-02-14 05:45:16.289163 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:45:16.289173 | orchestrator | 2026-02-14 05:45:16.289184 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-14 05:45:16.289194 | 
orchestrator | Saturday 14 February 2026 05:44:55 +0000 (0:00:01.180) 0:08:08.064 ***** 2026-02-14 05:45:16.289205 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0 2026-02-14 05:45:16.289215 | orchestrator | 2026-02-14 05:45:16.289226 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-14 05:45:16.289236 | orchestrator | Saturday 14 February 2026 05:44:57 +0000 (0:00:01.479) 0:08:09.544 ***** 2026-02-14 05:45:16.289253 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:45:16.289264 | orchestrator | 2026-02-14 05:45:16.289274 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-14 05:45:16.289285 | orchestrator | Saturday 14 February 2026 05:44:58 +0000 (0:00:01.129) 0:08:10.673 ***** 2026-02-14 05:45:16.289295 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:45:16.289329 | orchestrator | 2026-02-14 05:45:16.289350 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-14 05:45:16.289362 | orchestrator | Saturday 14 February 2026 05:44:59 +0000 (0:00:01.141) 0:08:11.815 ***** 2026-02-14 05:45:16.289373 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:45:16.289384 | orchestrator | 2026-02-14 05:45:16.289402 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-14 05:45:16.289413 | orchestrator | Saturday 14 February 2026 05:45:00 +0000 (0:00:01.153) 0:08:12.969 ***** 2026-02-14 05:45:16.289423 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:45:16.289434 | orchestrator | 2026-02-14 05:45:16.289445 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-14 05:45:16.289455 | orchestrator | Saturday 14 February 2026 05:45:01 +0000 (0:00:01.271) 0:08:14.240 ***** 2026-02-14 05:45:16.289466 | orchestrator | skipping: 
[testbed-node-0] 2026-02-14 05:45:16.289477 | orchestrator | 2026-02-14 05:45:16.289488 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-14 05:45:16.289498 | orchestrator | Saturday 14 February 2026 05:45:03 +0000 (0:00:01.150) 0:08:15.391 ***** 2026-02-14 05:45:16.289509 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:45:16.289519 | orchestrator | 2026-02-14 05:45:16.289530 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-14 05:45:16.289540 | orchestrator | Saturday 14 February 2026 05:45:04 +0000 (0:00:01.171) 0:08:16.563 ***** 2026-02-14 05:45:16.289551 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:45:16.289562 | orchestrator | 2026-02-14 05:45:16.289572 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-14 05:45:16.289583 | orchestrator | Saturday 14 February 2026 05:45:05 +0000 (0:00:01.237) 0:08:17.801 ***** 2026-02-14 05:45:16.289593 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:45:16.289604 | orchestrator | 2026-02-14 05:45:16.289615 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-14 05:45:16.289625 | orchestrator | Saturday 14 February 2026 05:45:06 +0000 (0:00:01.148) 0:08:18.950 ***** 2026-02-14 05:45:16.289636 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:45:16.289647 | orchestrator | 2026-02-14 05:45:16.289657 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-14 05:45:16.289668 | orchestrator | Saturday 14 February 2026 05:45:07 +0000 (0:00:01.155) 0:08:20.106 ***** 2026-02-14 05:45:16.289679 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0 2026-02-14 05:45:16.289689 | orchestrator | 2026-02-14 05:45:16.289700 | orchestrator | TASK [ceph-config : Create ceph initial 
directories] *************************** 2026-02-14 05:45:16.289711 | orchestrator | Saturday 14 February 2026 05:45:09 +0000 (0:00:01.612) 0:08:21.718 ***** 2026-02-14 05:45:16.289721 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph) 2026-02-14 05:45:16.289732 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/) 2026-02-14 05:45:16.289743 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-02-14 05:45:16.289754 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-02-14 05:45:16.289764 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-02-14 05:45:16.289775 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-02-14 05:45:16.289785 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-02-14 05:45:16.289796 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-02-14 05:45:16.289807 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-14 05:45:16.289817 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-14 05:45:16.289828 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-14 05:45:16.289839 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-14 05:45:16.289849 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-14 05:45:16.289860 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-14 05:45:16.289878 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph) 2026-02-14 05:46:04.412356 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph) 2026-02-14 05:46:04.412473 | orchestrator | 2026-02-14 05:46:04.412492 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-14 05:46:04.412531 | orchestrator | Saturday 14 February 2026 05:45:16 +0000 (0:00:06.874) 0:08:28.593 ***** 2026-02-14 
05:46:04.412544 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:46:04.412557 | orchestrator | 2026-02-14 05:46:04.412568 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-14 05:46:04.412579 | orchestrator | Saturday 14 February 2026 05:45:17 +0000 (0:00:01.139) 0:08:29.732 ***** 2026-02-14 05:46:04.412590 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:46:04.412601 | orchestrator | 2026-02-14 05:46:04.412612 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-14 05:46:04.412623 | orchestrator | Saturday 14 February 2026 05:45:18 +0000 (0:00:01.158) 0:08:30.891 ***** 2026-02-14 05:46:04.412634 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:46:04.412644 | orchestrator | 2026-02-14 05:46:04.412655 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-14 05:46:04.412666 | orchestrator | Saturday 14 February 2026 05:45:19 +0000 (0:00:01.150) 0:08:32.042 ***** 2026-02-14 05:46:04.412677 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:46:04.412687 | orchestrator | 2026-02-14 05:46:04.412698 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-14 05:46:04.412709 | orchestrator | Saturday 14 February 2026 05:45:20 +0000 (0:00:01.167) 0:08:33.209 ***** 2026-02-14 05:46:04.412720 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:46:04.412730 | orchestrator | 2026-02-14 05:46:04.412757 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-14 05:46:04.412769 | orchestrator | Saturday 14 February 2026 05:45:21 +0000 (0:00:01.111) 0:08:34.321 ***** 2026-02-14 05:46:04.412779 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:46:04.412790 | orchestrator | 2026-02-14 05:46:04.412801 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see 
how many osds are to be created] *** 2026-02-14 05:46:04.412813 | orchestrator | Saturday 14 February 2026 05:45:23 +0000 (0:00:01.136) 0:08:35.458 ***** 2026-02-14 05:46:04.412824 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:46:04.412835 | orchestrator | 2026-02-14 05:46:04.412848 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-14 05:46:04.412861 | orchestrator | Saturday 14 February 2026 05:45:24 +0000 (0:00:01.150) 0:08:36.608 ***** 2026-02-14 05:46:04.412874 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:46:04.412888 | orchestrator | 2026-02-14 05:46:04.412901 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-14 05:46:04.412913 | orchestrator | Saturday 14 February 2026 05:45:25 +0000 (0:00:01.183) 0:08:37.792 ***** 2026-02-14 05:46:04.412926 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:46:04.412940 | orchestrator | 2026-02-14 05:46:04.412952 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-14 05:46:04.412965 | orchestrator | Saturday 14 February 2026 05:45:26 +0000 (0:00:01.128) 0:08:38.921 ***** 2026-02-14 05:46:04.412977 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:46:04.412990 | orchestrator | 2026-02-14 05:46:04.413002 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-14 05:46:04.413015 | orchestrator | Saturday 14 February 2026 05:45:27 +0000 (0:00:01.138) 0:08:40.059 ***** 2026-02-14 05:46:04.413027 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:46:04.413039 | orchestrator | 2026-02-14 05:46:04.413051 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-14 05:46:04.413063 | orchestrator | Saturday 14 February 2026 05:45:28 +0000 (0:00:01.136) 
0:08:41.196 ***** 2026-02-14 05:46:04.413077 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:46:04.413090 | orchestrator | 2026-02-14 05:46:04.413102 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-14 05:46:04.413115 | orchestrator | Saturday 14 February 2026 05:45:29 +0000 (0:00:01.131) 0:08:42.327 ***** 2026-02-14 05:46:04.413127 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:46:04.413149 | orchestrator | 2026-02-14 05:46:04.413163 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-14 05:46:04.413176 | orchestrator | Saturday 14 February 2026 05:45:31 +0000 (0:00:01.297) 0:08:43.624 ***** 2026-02-14 05:46:04.413189 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:46:04.413200 | orchestrator | 2026-02-14 05:46:04.413211 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-14 05:46:04.413222 | orchestrator | Saturday 14 February 2026 05:45:32 +0000 (0:00:01.153) 0:08:44.778 ***** 2026-02-14 05:46:04.413232 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:46:04.413243 | orchestrator | 2026-02-14 05:46:04.413254 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-14 05:46:04.413282 | orchestrator | Saturday 14 February 2026 05:45:33 +0000 (0:00:01.309) 0:08:46.087 ***** 2026-02-14 05:46:04.413294 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:46:04.413305 | orchestrator | 2026-02-14 05:46:04.413315 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-14 05:46:04.413326 | orchestrator | Saturday 14 February 2026 05:45:34 +0000 (0:00:01.109) 0:08:47.197 ***** 2026-02-14 05:46:04.413337 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:46:04.413348 | orchestrator | 2026-02-14 05:46:04.413359 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-14 05:46:04.413371 | orchestrator | Saturday 14 February 2026 05:45:36 +0000 (0:00:01.143) 0:08:48.340 ***** 2026-02-14 05:46:04.413382 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:46:04.413393 | orchestrator | 2026-02-14 05:46:04.413403 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-14 05:46:04.413414 | orchestrator | Saturday 14 February 2026 05:45:37 +0000 (0:00:01.135) 0:08:49.476 ***** 2026-02-14 05:46:04.413425 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:46:04.413436 | orchestrator | 2026-02-14 05:46:04.413465 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-14 05:46:04.413476 | orchestrator | Saturday 14 February 2026 05:45:38 +0000 (0:00:01.169) 0:08:50.646 ***** 2026-02-14 05:46:04.413487 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:46:04.413498 | orchestrator | 2026-02-14 05:46:04.413509 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-14 05:46:04.413520 | orchestrator | Saturday 14 February 2026 05:45:39 +0000 (0:00:01.152) 0:08:51.798 ***** 2026-02-14 05:46:04.413530 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:46:04.413541 | orchestrator | 2026-02-14 05:46:04.413552 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-14 05:46:04.413562 | orchestrator | Saturday 14 February 2026 05:45:40 +0000 (0:00:01.147) 0:08:52.946 ***** 2026-02-14 05:46:04.413573 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-14 05:46:04.413585 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-14 05:46:04.413596 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-14 05:46:04.413606 | orchestrator | skipping: 
[testbed-node-0] 2026-02-14 05:46:04.413617 | orchestrator | 2026-02-14 05:46:04.413628 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-14 05:46:04.413639 | orchestrator | Saturday 14 February 2026 05:45:42 +0000 (0:00:01.806) 0:08:54.753 ***** 2026-02-14 05:46:04.413649 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-14 05:46:04.413660 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-14 05:46:04.413671 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-14 05:46:04.413687 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:46:04.413698 | orchestrator | 2026-02-14 05:46:04.413708 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-14 05:46:04.413719 | orchestrator | Saturday 14 February 2026 05:45:43 +0000 (0:00:01.405) 0:08:56.158 ***** 2026-02-14 05:46:04.413730 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-14 05:46:04.413747 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-14 05:46:04.413758 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-14 05:46:04.413769 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:46:04.413779 | orchestrator | 2026-02-14 05:46:04.413790 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-14 05:46:04.413801 | orchestrator | Saturday 14 February 2026 05:45:45 +0000 (0:00:01.416) 0:08:57.575 ***** 2026-02-14 05:46:04.413812 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:46:04.413822 | orchestrator | 2026-02-14 05:46:04.413833 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-14 05:46:04.413844 | orchestrator | Saturday 14 February 2026 05:45:46 +0000 (0:00:01.125) 0:08:58.701 ***** 2026-02-14 05:46:04.413854 | orchestrator | 
skipping: [testbed-node-0] => (item=0)  2026-02-14 05:46:04.413865 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:46:04.413875 | orchestrator | 2026-02-14 05:46:04.413886 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-14 05:46:04.413897 | orchestrator | Saturday 14 February 2026 05:45:47 +0000 (0:00:01.436) 0:09:00.138 ***** 2026-02-14 05:46:04.413908 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:46:04.413919 | orchestrator | 2026-02-14 05:46:04.413929 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-02-14 05:46:04.413940 | orchestrator | Saturday 14 February 2026 05:45:49 +0000 (0:00:01.727) 0:09:01.865 ***** 2026-02-14 05:46:04.413951 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:46:04.413962 | orchestrator | 2026-02-14 05:46:04.413972 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-02-14 05:46:04.413983 | orchestrator | Saturday 14 February 2026 05:45:50 +0000 (0:00:01.168) 0:09:03.034 ***** 2026-02-14 05:46:04.413994 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0 2026-02-14 05:46:04.414005 | orchestrator | 2026-02-14 05:46:04.414073 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-02-14 05:46:04.414086 | orchestrator | Saturday 14 February 2026 05:45:52 +0000 (0:00:01.572) 0:09:04.606 ***** 2026-02-14 05:46:04.414097 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-02-14 05:46:04.414108 | orchestrator | 2026-02-14 05:46:04.414119 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-02-14 05:46:04.414130 | orchestrator | Saturday 14 February 2026 05:45:55 +0000 (0:00:03.550) 0:09:08.157 ***** 2026-02-14 05:46:04.414149 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:46:04.414167 | 
orchestrator | 2026-02-14 05:46:04.414184 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-02-14 05:46:04.414202 | orchestrator | Saturday 14 February 2026 05:45:56 +0000 (0:00:01.131) 0:09:09.288 ***** 2026-02-14 05:46:04.414220 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:46:04.414237 | orchestrator | 2026-02-14 05:46:04.414254 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-02-14 05:46:04.414296 | orchestrator | Saturday 14 February 2026 05:45:58 +0000 (0:00:01.127) 0:09:10.416 ***** 2026-02-14 05:46:04.414316 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:46:04.414334 | orchestrator | 2026-02-14 05:46:04.414349 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-02-14 05:46:04.414365 | orchestrator | Saturday 14 February 2026 05:45:59 +0000 (0:00:01.168) 0:09:11.585 ***** 2026-02-14 05:46:04.414381 | orchestrator | changed: [testbed-node-0] 2026-02-14 05:46:04.414400 | orchestrator | 2026-02-14 05:46:04.414418 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-02-14 05:46:04.414437 | orchestrator | Saturday 14 February 2026 05:46:01 +0000 (0:00:02.057) 0:09:13.643 ***** 2026-02-14 05:46:04.414455 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:46:04.414474 | orchestrator | 2026-02-14 05:46:04.414492 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-02-14 05:46:04.414511 | orchestrator | Saturday 14 February 2026 05:46:02 +0000 (0:00:01.601) 0:09:15.245 ***** 2026-02-14 05:46:04.414542 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:46:04.414560 | orchestrator | 2026-02-14 05:46:04.414594 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-14 05:47:02.349313 | orchestrator | Saturday 14 February 2026 05:46:04 +0000 (0:00:01.479) 
0:09:16.724 ***** 2026-02-14 05:47:02.349433 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:47:02.349465 | orchestrator | 2026-02-14 05:47:02.349479 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-14 05:47:02.349502 | orchestrator | Saturday 14 February 2026 05:46:05 +0000 (0:00:01.498) 0:09:18.223 ***** 2026-02-14 05:47:02.349513 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:47:02.349525 | orchestrator | 2026-02-14 05:47:02.349536 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-02-14 05:47:02.349547 | orchestrator | Saturday 14 February 2026 05:46:07 +0000 (0:00:01.739) 0:09:19.962 ***** 2026-02-14 05:47:02.349558 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:47:02.349569 | orchestrator | 2026-02-14 05:47:02.349580 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-14 05:47:02.349591 | orchestrator | Saturday 14 February 2026 05:46:09 +0000 (0:00:01.748) 0:09:21.711 ***** 2026-02-14 05:47:02.349603 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-14 05:47:02.349614 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-14 05:47:02.349625 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-14 05:47:02.349637 | orchestrator | ok: [testbed-node-0 -> {{ item }}] 2026-02-14 05:47:02.349648 | orchestrator | 2026-02-14 05:47:02.349659 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-14 05:47:02.349670 | orchestrator | Saturday 14 February 2026 05:46:13 +0000 (0:00:03.968) 0:09:25.680 ***** 2026-02-14 05:47:02.349699 | orchestrator | changed: [testbed-node-0] 2026-02-14 05:47:02.349710 | orchestrator | 2026-02-14 05:47:02.349722 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-02-14 
05:47:02.349733 | orchestrator | Saturday 14 February 2026 05:46:15 +0000 (0:00:02.047) 0:09:27.728 ***** 2026-02-14 05:47:02.349743 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:47:02.349754 | orchestrator | 2026-02-14 05:47:02.349765 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-14 05:47:02.349776 | orchestrator | Saturday 14 February 2026 05:46:16 +0000 (0:00:01.194) 0:09:28.922 ***** 2026-02-14 05:47:02.349787 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:47:02.349798 | orchestrator | 2026-02-14 05:47:02.349812 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-14 05:47:02.349825 | orchestrator | Saturday 14 February 2026 05:46:17 +0000 (0:00:01.183) 0:09:30.106 ***** 2026-02-14 05:47:02.349837 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:47:02.349850 | orchestrator | 2026-02-14 05:47:02.349863 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-14 05:47:02.349876 | orchestrator | Saturday 14 February 2026 05:46:19 +0000 (0:00:02.104) 0:09:32.211 ***** 2026-02-14 05:47:02.349888 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:47:02.349898 | orchestrator | 2026-02-14 05:47:02.349909 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-14 05:47:02.349920 | orchestrator | Saturday 14 February 2026 05:46:21 +0000 (0:00:01.585) 0:09:33.796 ***** 2026-02-14 05:47:02.349931 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:47:02.349942 | orchestrator | 2026-02-14 05:47:02.349953 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-02-14 05:47:02.349964 | orchestrator | Saturday 14 February 2026 05:46:22 +0000 (0:00:01.177) 0:09:34.973 ***** 2026-02-14 05:47:02.349974 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0 2026-02-14 
05:47:02.349986 | orchestrator | 2026-02-14 05:47:02.349997 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-14 05:47:02.350008 | orchestrator | Saturday 14 February 2026 05:46:24 +0000 (0:00:01.563) 0:09:36.537 ***** 2026-02-14 05:47:02.350104 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:47:02.350118 | orchestrator | 2026-02-14 05:47:02.350129 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-14 05:47:02.350140 | orchestrator | Saturday 14 February 2026 05:46:25 +0000 (0:00:01.129) 0:09:37.667 ***** 2026-02-14 05:47:02.350151 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:47:02.350161 | orchestrator | 2026-02-14 05:47:02.350172 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-14 05:47:02.350183 | orchestrator | Saturday 14 February 2026 05:46:26 +0000 (0:00:01.138) 0:09:38.805 ***** 2026-02-14 05:47:02.350193 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0 2026-02-14 05:47:02.350204 | orchestrator | 2026-02-14 05:47:02.350253 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-02-14 05:47:02.350265 | orchestrator | Saturday 14 February 2026 05:46:28 +0000 (0:00:01.583) 0:09:40.389 ***** 2026-02-14 05:47:02.350275 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:47:02.350286 | orchestrator | 2026-02-14 05:47:02.350297 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-14 05:47:02.350308 | orchestrator | Saturday 14 February 2026 05:46:30 +0000 (0:00:02.292) 0:09:42.681 ***** 2026-02-14 05:47:02.350318 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:47:02.350329 | orchestrator | 2026-02-14 05:47:02.350340 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-02-14 
05:47:02.350351 | orchestrator | Saturday 14 February 2026 05:46:32 +0000 (0:00:01.973) 0:09:44.654 ***** 2026-02-14 05:47:02.350362 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:47:02.350373 | orchestrator | 2026-02-14 05:47:02.350383 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-14 05:47:02.350394 | orchestrator | Saturday 14 February 2026 05:46:34 +0000 (0:00:02.569) 0:09:47.224 ***** 2026-02-14 05:47:02.350405 | orchestrator | changed: [testbed-node-0] 2026-02-14 05:47:02.350415 | orchestrator | 2026-02-14 05:47:02.350426 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-14 05:47:02.350437 | orchestrator | Saturday 14 February 2026 05:46:38 +0000 (0:00:03.248) 0:09:50.472 ***** 2026-02-14 05:47:02.350448 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0 2026-02-14 05:47:02.350459 | orchestrator | 2026-02-14 05:47:02.350487 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2026-02-14 05:47:02.350499 | orchestrator | Saturday 14 February 2026 05:46:39 +0000 (0:00:01.703) 0:09:52.176 ***** 2026-02-14 05:47:02.350509 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:47:02.350520 | orchestrator | 2026-02-14 05:47:02.350531 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-14 05:47:02.350542 | orchestrator | Saturday 14 February 2026 05:46:42 +0000 (0:00:02.228) 0:09:54.405 ***** 2026-02-14 05:47:02.350552 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:47:02.350563 | orchestrator | 2026-02-14 05:47:02.350573 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-14 05:47:02.350584 | orchestrator | Saturday 14 February 2026 05:46:45 +0000 (0:00:03.027) 0:09:57.433 ***** 2026-02-14 05:47:02.350595 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:47:02.350606 | orchestrator | 2026-02-14 05:47:02.350616 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-14 05:47:02.350627 | orchestrator | Saturday 14 February 2026 05:46:46 +0000 (0:00:01.199) 0:09:58.633 ***** 2026-02-14 05:47:02.350640 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__00bc4e30ddc9572289d35718eb4b72edbacd0b8b'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-14 05:47:02.350661 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__00bc4e30ddc9572289d35718eb4b72edbacd0b8b'}}, {'key': 'cluster_network', 'value': 
'192.168.16.0/20'}]) 2026-02-14 05:47:02.350680 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__00bc4e30ddc9572289d35718eb4b72edbacd0b8b'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-14 05:47:02.350691 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__00bc4e30ddc9572289d35718eb4b72edbacd0b8b'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-14 05:47:02.350703 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__00bc4e30ddc9572289d35718eb4b72edbacd0b8b'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-14 05:47:02.350715 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__00bc4e30ddc9572289d35718eb4b72edbacd0b8b'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__00bc4e30ddc9572289d35718eb4b72edbacd0b8b'}])  2026-02-14 05:47:02.350728 | orchestrator | 2026-02-14 05:47:02.350739 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-02-14 05:47:02.350750 | orchestrator | Saturday 14 February 2026 05:46:56 +0000 (0:00:09.840) 0:10:08.473 ***** 
2026-02-14 05:47:02.350761 | orchestrator | changed: [testbed-node-0] 2026-02-14 05:47:02.350772 | orchestrator | 2026-02-14 05:47:02.350782 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-14 05:47:02.350793 | orchestrator | Saturday 14 February 2026 05:46:58 +0000 (0:00:02.496) 0:10:10.970 ***** 2026-02-14 05:47:02.350804 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-14 05:47:02.350815 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-14 05:47:02.350825 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-14 05:47:02.350836 | orchestrator | 2026-02-14 05:47:02.350847 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-14 05:47:02.350857 | orchestrator | Saturday 14 February 2026 05:47:00 +0000 (0:00:02.264) 0:10:13.235 ***** 2026-02-14 05:47:02.350868 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-14 05:47:02.350879 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-14 05:47:02.350890 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-14 05:47:02.350901 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:47:02.350911 | orchestrator | 2026-02-14 05:47:02.350922 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-02-14 05:47:02.350939 | orchestrator | Saturday 14 February 2026 05:47:02 +0000 (0:00:01.424) 0:10:14.660 ***** 2026-02-14 05:47:32.055439 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:47:32.055560 | orchestrator | 2026-02-14 05:47:32.055578 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-02-14 05:47:32.055591 | orchestrator | Saturday 14 February 2026 05:47:03 +0000 (0:00:01.127) 0:10:15.787 ***** 2026-02-14 05:47:32.055603 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:47:32.055615 | orchestrator | 2026-02-14 05:47:32.055649 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-02-14 05:47:32.055661 | orchestrator | 2026-02-14 05:47:32.055672 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-02-14 05:47:32.055683 | orchestrator | Saturday 14 February 2026 05:47:05 +0000 (0:00:02.186) 0:10:17.973 ***** 2026-02-14 05:47:32.055693 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:47:32.055704 | orchestrator | 2026-02-14 05:47:32.055715 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-02-14 05:47:32.055726 | orchestrator | Saturday 14 February 2026 05:47:06 +0000 (0:00:01.173) 0:10:19.147 ***** 2026-02-14 05:47:32.055737 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:47:32.055748 | orchestrator | 2026-02-14 05:47:32.055759 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-02-14 05:47:32.055770 | orchestrator | Saturday 14 February 2026 05:47:07 +0000 (0:00:00.801) 0:10:19.949 ***** 2026-02-14 05:47:32.055781 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:47:32.055791 | orchestrator | 2026-02-14 05:47:32.055802 | orchestrator | TASK [Select a running monitor] ************************************************ 2026-02-14 05:47:32.055813 | orchestrator | Saturday 14 February 2026 05:47:08 +0000 (0:00:00.813) 0:10:20.763 ***** 2026-02-14 05:47:32.055838 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:47:32.055850 | orchestrator | 2026-02-14 05:47:32.055860 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-14 05:47:32.055871 | orchestrator | Saturday 14 February 
2026 05:47:09 +0000 (0:00:00.801) 0:10:21.564 ***** 2026-02-14 05:47:32.055882 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1 2026-02-14 05:47:32.055892 | orchestrator | 2026-02-14 05:47:32.055903 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-14 05:47:32.055914 | orchestrator | Saturday 14 February 2026 05:47:10 +0000 (0:00:01.180) 0:10:22.745 ***** 2026-02-14 05:47:32.055924 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:47:32.055935 | orchestrator | 2026-02-14 05:47:32.055946 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-14 05:47:32.055956 | orchestrator | Saturday 14 February 2026 05:47:11 +0000 (0:00:01.454) 0:10:24.200 ***** 2026-02-14 05:47:32.055969 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:47:32.055982 | orchestrator | 2026-02-14 05:47:32.055994 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-14 05:47:32.056007 | orchestrator | Saturday 14 February 2026 05:47:13 +0000 (0:00:01.193) 0:10:25.393 ***** 2026-02-14 05:47:32.056019 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:47:32.056032 | orchestrator | 2026-02-14 05:47:32.056045 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-14 05:47:32.056058 | orchestrator | Saturday 14 February 2026 05:47:14 +0000 (0:00:01.530) 0:10:26.923 ***** 2026-02-14 05:47:32.056070 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:47:32.056083 | orchestrator | 2026-02-14 05:47:32.056095 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-14 05:47:32.056108 | orchestrator | Saturday 14 February 2026 05:47:15 +0000 (0:00:01.243) 0:10:28.167 ***** 2026-02-14 05:47:32.056121 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:47:32.056133 | orchestrator | 2026-02-14 05:47:32.056145 | 
orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-14 05:47:32.056158 | orchestrator | Saturday 14 February 2026 05:47:17 +0000 (0:00:01.190) 0:10:29.357 ***** 2026-02-14 05:47:32.056170 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:47:32.056183 | orchestrator | 2026-02-14 05:47:32.056223 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-14 05:47:32.056236 | orchestrator | Saturday 14 February 2026 05:47:18 +0000 (0:00:01.254) 0:10:30.612 ***** 2026-02-14 05:47:32.056249 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:47:32.056262 | orchestrator | 2026-02-14 05:47:32.056275 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-14 05:47:32.056288 | orchestrator | Saturday 14 February 2026 05:47:19 +0000 (0:00:01.171) 0:10:31.783 ***** 2026-02-14 05:47:32.056311 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:47:32.056324 | orchestrator | 2026-02-14 05:47:32.056336 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-14 05:47:32.056347 | orchestrator | Saturday 14 February 2026 05:47:20 +0000 (0:00:01.337) 0:10:33.121 ***** 2026-02-14 05:47:32.056358 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 05:47:32.056369 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-14 05:47:32.056380 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 05:47:32.056391 | orchestrator | 2026-02-14 05:47:32.056402 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-14 05:47:32.056412 | orchestrator | Saturday 14 February 2026 05:47:22 +0000 (0:00:01.722) 0:10:34.843 ***** 2026-02-14 05:47:32.056423 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:47:32.056434 | 
orchestrator | 2026-02-14 05:47:32.056444 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-14 05:47:32.056455 | orchestrator | Saturday 14 February 2026 05:47:23 +0000 (0:00:01.249) 0:10:36.093 ***** 2026-02-14 05:47:32.056466 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 05:47:32.056476 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-14 05:47:32.056487 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 05:47:32.056498 | orchestrator | 2026-02-14 05:47:32.056508 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-14 05:47:32.056519 | orchestrator | Saturday 14 February 2026 05:47:26 +0000 (0:00:02.845) 0:10:38.939 ***** 2026-02-14 05:47:32.056548 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-14 05:47:32.056560 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-14 05:47:32.056571 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-14 05:47:32.056582 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:47:32.056592 | orchestrator | 2026-02-14 05:47:32.056603 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-14 05:47:32.056614 | orchestrator | Saturday 14 February 2026 05:47:28 +0000 (0:00:01.475) 0:10:40.415 ***** 2026-02-14 05:47:32.056630 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-14 05:47:32.056652 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-14 05:47:32.056669 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-14 05:47:32.056680 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:47:32.056691 | orchestrator | 2026-02-14 05:47:32.056701 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-14 05:47:32.056712 | orchestrator | Saturday 14 February 2026 05:47:29 +0000 (0:00:01.598) 0:10:42.013 ***** 2026-02-14 05:47:32.056725 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 05:47:32.056739 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 05:47:32.056758 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 05:47:32.056769 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:47:32.056780 | orchestrator | 2026-02-14 05:47:32.056791 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-14 05:47:32.056802 | orchestrator | Saturday 14 February 2026 05:47:30 +0000 (0:00:01.166) 0:10:43.180 ***** 2026-02-14 05:47:32.056815 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'fcade5e8eca4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-14 05:47:24.306733', 'end': '2026-02-14 05:47:24.357640', 'delta': '0:00:00.050907', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fcade5e8eca4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-14 05:47:32.056838 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '26dcb1313f5c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-14 05:47:24.872572', 'end': '2026-02-14 05:47:24.925368', 'delta': '0:00:00.052796', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['26dcb1313f5c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-14 05:47:51.438309 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '7aff8e7c54ed', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-14 05:47:25.422754', 'end': '2026-02-14 05:47:25.473744', 'delta': '0:00:00.050990', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['7aff8e7c54ed'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-14 05:47:51.438460 | orchestrator | 2026-02-14 05:47:51.438488 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-14 05:47:51.438532 | orchestrator | Saturday 14 February 2026 05:47:32 +0000 (0:00:01.188) 0:10:44.368 ***** 2026-02-14 05:47:51.438552 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:47:51.438575 | orchestrator | 2026-02-14 05:47:51.438595 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-14 05:47:51.438614 | orchestrator | Saturday 14 February 2026 05:47:33 +0000 (0:00:01.312) 0:10:45.681 ***** 2026-02-14 05:47:51.438633 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:47:51.438685 | orchestrator | 2026-02-14 05:47:51.438706 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-14 05:47:51.438724 | orchestrator | Saturday 14 February 2026 05:47:34 +0000 (0:00:01.335) 0:10:47.017 ***** 2026-02-14 05:47:51.438742 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:47:51.438760 | orchestrator | 2026-02-14 05:47:51.438778 | orchestrator | TASK 
[ceph-facts : Get current fsid] ******************************************* 2026-02-14 05:47:51.438797 | orchestrator | Saturday 14 February 2026 05:47:35 +0000 (0:00:01.292) 0:10:48.310 ***** 2026-02-14 05:47:51.438849 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] 2026-02-14 05:47:51.438869 | orchestrator | 2026-02-14 05:47:51.438889 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-14 05:47:51.438909 | orchestrator | Saturday 14 February 2026 05:47:38 +0000 (0:00:02.076) 0:10:50.387 ***** 2026-02-14 05:47:51.438927 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:47:51.438946 | orchestrator | 2026-02-14 05:47:51.438964 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-14 05:47:51.438983 | orchestrator | Saturday 14 February 2026 05:47:39 +0000 (0:00:01.140) 0:10:51.527 ***** 2026-02-14 05:47:51.439002 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:47:51.439020 | orchestrator | 2026-02-14 05:47:51.439038 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-14 05:47:51.439058 | orchestrator | Saturday 14 February 2026 05:47:40 +0000 (0:00:01.372) 0:10:52.899 ***** 2026-02-14 05:47:51.439076 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:47:51.439094 | orchestrator | 2026-02-14 05:47:51.439110 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-14 05:47:51.439121 | orchestrator | Saturday 14 February 2026 05:47:41 +0000 (0:00:01.218) 0:10:54.118 ***** 2026-02-14 05:47:51.439131 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:47:51.439142 | orchestrator | 2026-02-14 05:47:51.439153 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-14 05:47:51.439163 | orchestrator | Saturday 14 February 2026 05:47:42 +0000 (0:00:01.152) 0:10:55.271 ***** 
2026-02-14 05:47:51.439174 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:47:51.439227 | orchestrator | 2026-02-14 05:47:51.439239 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-14 05:47:51.439250 | orchestrator | Saturday 14 February 2026 05:47:44 +0000 (0:00:01.226) 0:10:56.497 ***** 2026-02-14 05:47:51.439260 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:47:51.439271 | orchestrator | 2026-02-14 05:47:51.439282 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-14 05:47:51.439293 | orchestrator | Saturday 14 February 2026 05:47:45 +0000 (0:00:01.229) 0:10:57.728 ***** 2026-02-14 05:47:51.439303 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:47:51.439314 | orchestrator | 2026-02-14 05:47:51.439326 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-14 05:47:51.439337 | orchestrator | Saturday 14 February 2026 05:47:46 +0000 (0:00:01.126) 0:10:58.855 ***** 2026-02-14 05:47:51.439347 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:47:51.439358 | orchestrator | 2026-02-14 05:47:51.439370 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-14 05:47:51.439380 | orchestrator | Saturday 14 February 2026 05:47:47 +0000 (0:00:01.166) 0:11:00.022 ***** 2026-02-14 05:47:51.439391 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:47:51.439402 | orchestrator | 2026-02-14 05:47:51.439413 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-14 05:47:51.439425 | orchestrator | Saturday 14 February 2026 05:47:48 +0000 (0:00:01.142) 0:11:01.164 ***** 2026-02-14 05:47:51.439435 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:47:51.439446 | orchestrator | 2026-02-14 05:47:51.439456 | orchestrator | TASK [ceph-facts : Collect existed devices] 
************************************ 2026-02-14 05:47:51.439467 | orchestrator | Saturday 14 February 2026 05:47:50 +0000 (0:00:01.198) 0:11:02.363 ***** 2026-02-14 05:47:51.439517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:47:51.439533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:47:51.439553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:47:51.439566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-09-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 
'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-14 05:47:51.439579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:47:51.439591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:47:51.439602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:47:51.439627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9', 'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '582964e9', 'removable': '0', 'support_discard': '4096', 'partitions': 
{'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part16', 'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part14', 'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part15', 'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part1', 'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-14 05:47:52.696549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:47:52.696666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:47:52.696694 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:47:52.696710 | orchestrator | 2026-02-14 05:47:52.696722 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-14 05:47:52.696734 | orchestrator | Saturday 14 February 2026 05:47:51 +0000 (0:00:01.385) 0:11:03.749 ***** 2026-02-14 05:47:52.696748 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:47:52.696762 | 
orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:47:52.696774 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:47:52.696812 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-09-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:47:52.696852 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:47:52.696865 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:47:52.696876 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:47:52.696892 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9', 'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '582964e9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part16', 'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part14', 'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part15', 'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part1', 'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:47:52.696927 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:48:29.279406 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:48:29.279524 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:48:29.279542 | orchestrator | 2026-02-14 05:48:29.279555 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-14 05:48:29.279568 | 
orchestrator | Saturday 14 February 2026 05:47:52 +0000 (0:00:01.263) 0:11:05.012 ***** 2026-02-14 05:48:29.279579 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:48:29.279591 | orchestrator | 2026-02-14 05:48:29.279602 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-14 05:48:29.279613 | orchestrator | Saturday 14 February 2026 05:47:54 +0000 (0:00:01.515) 0:11:06.527 ***** 2026-02-14 05:48:29.279623 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:48:29.279634 | orchestrator | 2026-02-14 05:48:29.279645 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-14 05:48:29.279656 | orchestrator | Saturday 14 February 2026 05:47:55 +0000 (0:00:01.162) 0:11:07.690 ***** 2026-02-14 05:48:29.279667 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:48:29.279678 | orchestrator | 2026-02-14 05:48:29.279688 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-14 05:48:29.279699 | orchestrator | Saturday 14 February 2026 05:47:57 +0000 (0:00:01.755) 0:11:09.447 ***** 2026-02-14 05:48:29.279710 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:48:29.279745 | orchestrator | 2026-02-14 05:48:29.279757 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-14 05:48:29.279768 | orchestrator | Saturday 14 February 2026 05:47:58 +0000 (0:00:01.130) 0:11:10.579 ***** 2026-02-14 05:48:29.279779 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:48:29.279789 | orchestrator | 2026-02-14 05:48:29.279800 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-14 05:48:29.279811 | orchestrator | Saturday 14 February 2026 05:47:59 +0000 (0:00:01.306) 0:11:11.885 ***** 2026-02-14 05:48:29.279822 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:48:29.279833 | orchestrator | 2026-02-14 05:48:29.279851 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-14 05:48:29.279869 | orchestrator | Saturday 14 February 2026 05:48:00 +0000 (0:00:01.133) 0:11:13.019 ***** 2026-02-14 05:48:29.279888 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-02-14 05:48:29.279907 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-14 05:48:29.279926 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-02-14 05:48:29.279939 | orchestrator | 2026-02-14 05:48:29.279952 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-14 05:48:29.279965 | orchestrator | Saturday 14 February 2026 05:48:02 +0000 (0:00:01.778) 0:11:14.797 ***** 2026-02-14 05:48:29.279978 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-14 05:48:29.279991 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-14 05:48:29.280003 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-14 05:48:29.280015 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:48:29.280027 | orchestrator | 2026-02-14 05:48:29.280039 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-14 05:48:29.280052 | orchestrator | Saturday 14 February 2026 05:48:03 +0000 (0:00:01.226) 0:11:16.024 ***** 2026-02-14 05:48:29.280064 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:48:29.280076 | orchestrator | 2026-02-14 05:48:29.280089 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-14 05:48:29.280102 | orchestrator | Saturday 14 February 2026 05:48:04 +0000 (0:00:01.184) 0:11:17.209 ***** 2026-02-14 05:48:29.280114 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 05:48:29.280127 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-14 
05:48:29.280139 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 05:48:29.280175 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-14 05:48:29.280190 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-14 05:48:29.280204 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-14 05:48:29.280214 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-14 05:48:29.280225 | orchestrator | 2026-02-14 05:48:29.280236 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-14 05:48:29.280247 | orchestrator | Saturday 14 February 2026 05:48:07 +0000 (0:00:02.555) 0:11:19.764 ***** 2026-02-14 05:48:29.280257 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 05:48:29.280268 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-14 05:48:29.280279 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 05:48:29.280290 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-14 05:48:29.280321 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-14 05:48:29.280332 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-14 05:48:29.280343 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-14 05:48:29.280364 | orchestrator | 2026-02-14 05:48:29.280375 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-02-14 05:48:29.280385 | orchestrator | Saturday 14 February 2026 05:48:09 +0000 (0:00:02.551) 0:11:22.316 
***** 2026-02-14 05:48:29.280396 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:48:29.280407 | orchestrator | 2026-02-14 05:48:29.280418 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-02-14 05:48:29.280428 | orchestrator | Saturday 14 February 2026 05:48:10 +0000 (0:00:00.871) 0:11:23.188 ***** 2026-02-14 05:48:29.280439 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:48:29.280450 | orchestrator | 2026-02-14 05:48:29.280460 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-02-14 05:48:29.280471 | orchestrator | Saturday 14 February 2026 05:48:11 +0000 (0:00:00.940) 0:11:24.128 ***** 2026-02-14 05:48:29.280482 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:48:29.280492 | orchestrator | 2026-02-14 05:48:29.280598 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-02-14 05:48:29.280619 | orchestrator | Saturday 14 February 2026 05:48:12 +0000 (0:00:00.811) 0:11:24.940 ***** 2026-02-14 05:48:29.280631 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:48:29.280642 | orchestrator | 2026-02-14 05:48:29.280652 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-02-14 05:48:29.280663 | orchestrator | Saturday 14 February 2026 05:48:14 +0000 (0:00:01.573) 0:11:26.513 ***** 2026-02-14 05:48:29.280674 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:48:29.280685 | orchestrator | 2026-02-14 05:48:29.280696 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-02-14 05:48:29.280707 | orchestrator | Saturday 14 February 2026 05:48:15 +0000 (0:00:00.866) 0:11:27.380 ***** 2026-02-14 05:48:29.280718 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-14 05:48:29.280728 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-14 
05:48:29.280739 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-14 05:48:29.280750 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:48:29.280761 | orchestrator | 2026-02-14 05:48:29.280772 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-02-14 05:48:29.280782 | orchestrator | Saturday 14 February 2026 05:48:16 +0000 (0:00:01.053) 0:11:28.433 ***** 2026-02-14 05:48:29.280793 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-0'])  2026-02-14 05:48:29.280804 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-1'])  2026-02-14 05:48:29.280814 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-2'])  2026-02-14 05:48:29.280825 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])  2026-02-14 05:48:29.280836 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])  2026-02-14 05:48:29.280847 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])  2026-02-14 05:48:29.280858 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:48:29.280868 | orchestrator | 2026-02-14 05:48:29.280879 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-02-14 05:48:29.280890 | orchestrator | Saturday 14 February 2026 05:48:17 +0000 (0:00:01.456) 0:11:29.890 ***** 2026-02-14 05:48:29.280901 | orchestrator | changed: [testbed-node-1] => (item=testbed-node-1) 2026-02-14 05:48:29.280911 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-14 05:48:29.280922 | orchestrator | 2026-02-14 05:48:29.280933 | orchestrator | TASK [Mask the mgr service] **************************************************** 2026-02-14 05:48:29.280944 | orchestrator | Saturday 14 February 2026 05:48:20 +0000 (0:00:03.358) 0:11:33.249 ***** 
2026-02-14 05:48:29.280954 | orchestrator | changed: [testbed-node-1] 2026-02-14 05:48:29.280965 | orchestrator | 2026-02-14 05:48:29.280976 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-14 05:48:29.280994 | orchestrator | Saturday 14 February 2026 05:48:23 +0000 (0:00:02.142) 0:11:35.392 ***** 2026-02-14 05:48:29.281005 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1 2026-02-14 05:48:29.281016 | orchestrator | 2026-02-14 05:48:29.281027 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-14 05:48:29.281038 | orchestrator | Saturday 14 February 2026 05:48:24 +0000 (0:00:01.148) 0:11:36.540 ***** 2026-02-14 05:48:29.281048 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1 2026-02-14 05:48:29.281059 | orchestrator | 2026-02-14 05:48:29.281070 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-14 05:48:29.281081 | orchestrator | Saturday 14 February 2026 05:48:25 +0000 (0:00:01.157) 0:11:37.698 ***** 2026-02-14 05:48:29.281092 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:48:29.281103 | orchestrator | 2026-02-14 05:48:29.281113 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-14 05:48:29.281124 | orchestrator | Saturday 14 February 2026 05:48:26 +0000 (0:00:01.578) 0:11:39.276 ***** 2026-02-14 05:48:29.281135 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:48:29.281146 | orchestrator | 2026-02-14 05:48:29.281185 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-14 05:48:29.281201 | orchestrator | Saturday 14 February 2026 05:48:28 +0000 (0:00:01.173) 0:11:40.450 ***** 2026-02-14 05:48:29.281212 | orchestrator | skipping: [testbed-node-1] 2026-02-14 
05:48:29.281223 | orchestrator | 2026-02-14 05:48:29.281234 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-14 05:48:29.281254 | orchestrator | Saturday 14 February 2026 05:48:29 +0000 (0:00:01.142) 0:11:41.592 ***** 2026-02-14 05:49:13.852054 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:49:13.852228 | orchestrator | 2026-02-14 05:49:13.852255 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-14 05:49:13.852268 | orchestrator | Saturday 14 February 2026 05:48:30 +0000 (0:00:01.344) 0:11:42.937 ***** 2026-02-14 05:49:13.852279 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:49:13.852292 | orchestrator | 2026-02-14 05:49:13.852303 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-14 05:49:13.852313 | orchestrator | Saturday 14 February 2026 05:48:32 +0000 (0:00:01.672) 0:11:44.610 ***** 2026-02-14 05:49:13.852324 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:49:13.852335 | orchestrator | 2026-02-14 05:49:13.852346 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-14 05:49:13.852357 | orchestrator | Saturday 14 February 2026 05:48:33 +0000 (0:00:01.161) 0:11:45.771 ***** 2026-02-14 05:49:13.852368 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:49:13.852379 | orchestrator | 2026-02-14 05:49:13.852390 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-14 05:49:13.852401 | orchestrator | Saturday 14 February 2026 05:48:34 +0000 (0:00:01.159) 0:11:46.930 ***** 2026-02-14 05:49:13.852411 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:49:13.852422 | orchestrator | 2026-02-14 05:49:13.852433 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-14 05:49:13.852444 | orchestrator | Saturday 14 February 
2026 05:48:36 +0000 (0:00:01.544) 0:11:48.475 ***** 2026-02-14 05:49:13.852455 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:49:13.852465 | orchestrator | 2026-02-14 05:49:13.852476 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-14 05:49:13.852487 | orchestrator | Saturday 14 February 2026 05:48:37 +0000 (0:00:01.589) 0:11:50.065 ***** 2026-02-14 05:49:13.852498 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:49:13.852517 | orchestrator | 2026-02-14 05:49:13.852535 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-14 05:49:13.852553 | orchestrator | Saturday 14 February 2026 05:48:38 +0000 (0:00:00.816) 0:11:50.882 ***** 2026-02-14 05:49:13.852573 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:49:13.852619 | orchestrator | 2026-02-14 05:49:13.852634 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-14 05:49:13.852647 | orchestrator | Saturday 14 February 2026 05:48:39 +0000 (0:00:00.894) 0:11:51.777 ***** 2026-02-14 05:49:13.852659 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:49:13.852671 | orchestrator | 2026-02-14 05:49:13.852684 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-14 05:49:13.852697 | orchestrator | Saturday 14 February 2026 05:48:40 +0000 (0:00:00.802) 0:11:52.579 ***** 2026-02-14 05:49:13.852709 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:49:13.852721 | orchestrator | 2026-02-14 05:49:13.852734 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-14 05:49:13.852745 | orchestrator | Saturday 14 February 2026 05:48:41 +0000 (0:00:00.791) 0:11:53.371 ***** 2026-02-14 05:49:13.852755 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:49:13.852766 | orchestrator | 2026-02-14 05:49:13.852777 | orchestrator | TASK 
[ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-14 05:49:13.852788 | orchestrator | Saturday 14 February 2026 05:48:41 +0000 (0:00:00.798) 0:11:54.169 ***** 2026-02-14 05:49:13.852804 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:49:13.852823 | orchestrator | 2026-02-14 05:49:13.852841 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-14 05:49:13.852859 | orchestrator | Saturday 14 February 2026 05:48:42 +0000 (0:00:00.791) 0:11:54.960 ***** 2026-02-14 05:49:13.852876 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:49:13.852893 | orchestrator | 2026-02-14 05:49:13.852911 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-14 05:49:13.852930 | orchestrator | Saturday 14 February 2026 05:48:43 +0000 (0:00:00.769) 0:11:55.730 ***** 2026-02-14 05:49:13.852948 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:49:13.852968 | orchestrator | 2026-02-14 05:49:13.852986 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-14 05:49:13.853003 | orchestrator | Saturday 14 February 2026 05:48:44 +0000 (0:00:00.817) 0:11:56.548 ***** 2026-02-14 05:49:13.853014 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:49:13.853025 | orchestrator | 2026-02-14 05:49:13.853035 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-14 05:49:13.853046 | orchestrator | Saturday 14 February 2026 05:48:45 +0000 (0:00:01.053) 0:11:57.602 ***** 2026-02-14 05:49:13.853057 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:49:13.853068 | orchestrator | 2026-02-14 05:49:13.853079 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-14 05:49:13.853090 | orchestrator | Saturday 14 February 2026 05:48:46 +0000 (0:00:00.866) 0:11:58.469 ***** 2026-02-14 05:49:13.853100 | 
orchestrator | skipping: [testbed-node-1] 2026-02-14 05:49:13.853111 | orchestrator | 2026-02-14 05:49:13.853150 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-14 05:49:13.853165 | orchestrator | Saturday 14 February 2026 05:48:46 +0000 (0:00:00.790) 0:11:59.259 ***** 2026-02-14 05:49:13.853176 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:49:13.853187 | orchestrator | 2026-02-14 05:49:13.853198 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-14 05:49:13.853208 | orchestrator | Saturday 14 February 2026 05:48:47 +0000 (0:00:00.799) 0:12:00.059 ***** 2026-02-14 05:49:13.853219 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:49:13.853230 | orchestrator | 2026-02-14 05:49:13.853241 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-14 05:49:13.853251 | orchestrator | Saturday 14 February 2026 05:48:48 +0000 (0:00:00.787) 0:12:00.847 ***** 2026-02-14 05:49:13.853262 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:49:13.853273 | orchestrator | 2026-02-14 05:49:13.853298 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-14 05:49:13.853310 | orchestrator | Saturday 14 February 2026 05:48:49 +0000 (0:00:00.819) 0:12:01.667 ***** 2026-02-14 05:49:13.853320 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:49:13.853345 | orchestrator | 2026-02-14 05:49:13.853377 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-14 05:49:13.853389 | orchestrator | Saturday 14 February 2026 05:48:50 +0000 (0:00:00.829) 0:12:02.496 ***** 2026-02-14 05:49:13.853400 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:49:13.853410 | orchestrator | 2026-02-14 05:49:13.853421 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 
2026-02-14 05:49:13.853432 | orchestrator | Saturday 14 February 2026 05:48:50 +0000 (0:00:00.808) 0:12:03.305 *****
2026-02-14 05:49:13.853443 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:13.853454 | orchestrator |
2026-02-14 05:49:13.853464 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-14 05:49:13.853476 | orchestrator | Saturday 14 February 2026 05:48:51 +0000 (0:00:00.834) 0:12:04.140 *****
2026-02-14 05:49:13.853486 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:13.853497 | orchestrator |
2026-02-14 05:49:13.853508 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-14 05:49:13.853518 | orchestrator | Saturday 14 February 2026 05:48:52 +0000 (0:00:00.811) 0:12:04.951 *****
2026-02-14 05:49:13.853529 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:13.853540 | orchestrator |
2026-02-14 05:49:13.853551 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-14 05:49:13.853561 | orchestrator | Saturday 14 February 2026 05:48:53 +0000 (0:00:00.812) 0:12:05.764 *****
2026-02-14 05:49:13.853572 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:13.853583 | orchestrator |
2026-02-14 05:49:13.853593 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-14 05:49:13.853604 | orchestrator | Saturday 14 February 2026 05:48:54 +0000 (0:00:00.802) 0:12:06.566 *****
2026-02-14 05:49:13.853615 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:13.853625 | orchestrator |
2026-02-14 05:49:13.853636 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-14 05:49:13.853647 | orchestrator | Saturday 14 February 2026 05:48:55 +0000 (0:00:00.975) 0:12:07.542 *****
2026-02-14 05:49:13.853657 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:13.853668 | orchestrator |
2026-02-14 05:49:13.853679 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-14 05:49:13.853689 | orchestrator | Saturday 14 February 2026 05:48:56 +0000 (0:00:00.870) 0:12:08.412 *****
2026-02-14 05:49:13.853700 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:49:13.853711 | orchestrator |
2026-02-14 05:49:13.853722 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-14 05:49:13.853732 | orchestrator | Saturday 14 February 2026 05:48:57 +0000 (0:00:01.765) 0:12:10.178 *****
2026-02-14 05:49:13.853743 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:49:13.853754 | orchestrator |
2026-02-14 05:49:13.853765 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-14 05:49:13.853775 | orchestrator | Saturday 14 February 2026 05:48:59 +0000 (0:00:02.064) 0:12:12.243 *****
2026-02-14 05:49:13.853786 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1
2026-02-14 05:49:13.853798 | orchestrator |
2026-02-14 05:49:13.853809 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-14 05:49:13.853820 | orchestrator | Saturday 14 February 2026 05:49:01 +0000 (0:00:01.181) 0:12:13.425 *****
2026-02-14 05:49:13.853830 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:13.853841 | orchestrator |
2026-02-14 05:49:13.853852 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-14 05:49:13.853862 | orchestrator | Saturday 14 February 2026 05:49:02 +0000 (0:00:01.135) 0:12:14.560 *****
2026-02-14 05:49:13.853873 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:13.853884 | orchestrator |
2026-02-14 05:49:13.853894 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-14 05:49:13.853905 | orchestrator | Saturday 14 February 2026 05:49:03 +0000 (0:00:01.213) 0:12:15.773 *****
2026-02-14 05:49:13.853924 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-14 05:49:13.853935 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-14 05:49:13.853946 | orchestrator |
2026-02-14 05:49:13.853957 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-14 05:49:13.853967 | orchestrator | Saturday 14 February 2026 05:49:05 +0000 (0:00:01.954) 0:12:17.728 *****
2026-02-14 05:49:13.853978 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:49:13.853989 | orchestrator |
2026-02-14 05:49:13.853999 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-14 05:49:13.854010 | orchestrator | Saturday 14 February 2026 05:49:06 +0000 (0:00:01.523) 0:12:19.252 *****
2026-02-14 05:49:13.854080 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:13.854091 | orchestrator |
2026-02-14 05:49:13.854102 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-14 05:49:13.854113 | orchestrator | Saturday 14 February 2026 05:49:08 +0000 (0:00:01.203) 0:12:20.455 *****
2026-02-14 05:49:13.854146 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:13.854203 | orchestrator |
2026-02-14 05:49:13.854217 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-14 05:49:13.854228 | orchestrator | Saturday 14 February 2026 05:49:08 +0000 (0:00:00.800) 0:12:21.256 *****
2026-02-14 05:49:13.854239 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:13.854249 | orchestrator |
2026-02-14 05:49:13.854260 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-14 05:49:13.854271 | orchestrator | Saturday 14 February 2026 05:49:09 +0000 (0:00:00.972) 0:12:22.229 *****
2026-02-14 05:49:13.854281 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1
2026-02-14 05:49:13.854292 | orchestrator |
2026-02-14 05:49:13.854302 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-14 05:49:13.854319 | orchestrator | Saturday 14 February 2026 05:49:11 +0000 (0:00:01.128) 0:12:23.358 *****
2026-02-14 05:49:13.854330 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:49:13.854341 | orchestrator |
2026-02-14 05:49:13.854352 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-14 05:49:13.854372 | orchestrator | Saturday 14 February 2026 05:49:13 +0000 (0:00:02.804) 0:12:26.162 *****
2026-02-14 05:49:55.136362 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-14 05:49:55.136478 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-14 05:49:55.136493 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-14 05:49:55.136505 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:55.136518 | orchestrator |
2026-02-14 05:49:55.136529 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-14 05:49:55.136540 | orchestrator | Saturday 14 February 2026 05:49:15 +0000 (0:00:01.194) 0:12:27.356 *****
2026-02-14 05:49:55.136551 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:55.136562 | orchestrator |
2026-02-14 05:49:55.136573 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-14 05:49:55.136585 | orchestrator | Saturday 14 February 2026 05:49:16 +0000 (0:00:01.151) 0:12:28.508 *****
2026-02-14 05:49:55.136596 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:55.136606 | orchestrator |
2026-02-14 05:49:55.136617 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-14 05:49:55.136628 | orchestrator | Saturday 14 February 2026 05:49:17 +0000 (0:00:01.153) 0:12:29.662 *****
2026-02-14 05:49:55.136639 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:55.136650 | orchestrator |
2026-02-14 05:49:55.136661 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-14 05:49:55.136672 | orchestrator | Saturday 14 February 2026 05:49:18 +0000 (0:00:01.182) 0:12:30.845 *****
2026-02-14 05:49:55.136706 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:55.136718 | orchestrator |
2026-02-14 05:49:55.136729 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-14 05:49:55.136740 | orchestrator | Saturday 14 February 2026 05:49:19 +0000 (0:00:01.180) 0:12:32.025 *****
2026-02-14 05:49:55.136751 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:55.136762 | orchestrator |
2026-02-14 05:49:55.136772 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-14 05:49:55.136783 | orchestrator | Saturday 14 February 2026 05:49:20 +0000 (0:00:00.933) 0:12:32.958 *****
2026-02-14 05:49:55.136794 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:49:55.136806 | orchestrator |
2026-02-14 05:49:55.136816 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-14 05:49:55.136827 | orchestrator | Saturday 14 February 2026 05:49:22 +0000 (0:00:02.298) 0:12:35.257 *****
2026-02-14 05:49:55.136838 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:49:55.136849 | orchestrator |
2026-02-14 05:49:55.136860 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-14 05:49:55.136870 | orchestrator | Saturday 14 February 2026 05:49:23 +0000 (0:00:00.823) 0:12:36.080 *****
2026-02-14 05:49:55.136881 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1
2026-02-14 05:49:55.136895 | orchestrator |
2026-02-14 05:49:55.136908 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-14 05:49:55.136920 | orchestrator | Saturday 14 February 2026 05:49:25 +0000 (0:00:01.389) 0:12:37.469 *****
2026-02-14 05:49:55.136932 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:55.136944 | orchestrator |
2026-02-14 05:49:55.136957 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-14 05:49:55.136969 | orchestrator | Saturday 14 February 2026 05:49:26 +0000 (0:00:01.195) 0:12:38.665 *****
2026-02-14 05:49:55.136981 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:55.136994 | orchestrator |
2026-02-14 05:49:55.137006 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-14 05:49:55.137018 | orchestrator | Saturday 14 February 2026 05:49:27 +0000 (0:00:01.226) 0:12:39.891 *****
2026-02-14 05:49:55.137031 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:55.137043 | orchestrator |
2026-02-14 05:49:55.137056 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-14 05:49:55.137068 | orchestrator | Saturday 14 February 2026 05:49:28 +0000 (0:00:01.261) 0:12:41.153 *****
2026-02-14 05:49:55.137080 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:55.137092 | orchestrator |
2026-02-14 05:49:55.137134 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-14 05:49:55.137146 | orchestrator | Saturday 14 February 2026 05:49:29 +0000 (0:00:01.174) 0:12:42.328 *****
2026-02-14 05:49:55.137159 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:55.137172 | orchestrator |
2026-02-14 05:49:55.137184 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-14 05:49:55.137196 | orchestrator | Saturday 14 February 2026 05:49:31 +0000 (0:00:01.202) 0:12:43.530 *****
2026-02-14 05:49:55.137209 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:55.137221 | orchestrator |
2026-02-14 05:49:55.137233 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-14 05:49:55.137246 | orchestrator | Saturday 14 February 2026 05:49:32 +0000 (0:00:01.217) 0:12:44.748 *****
2026-02-14 05:49:55.137259 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:55.137271 | orchestrator |
2026-02-14 05:49:55.137283 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-14 05:49:55.137294 | orchestrator | Saturday 14 February 2026 05:49:33 +0000 (0:00:01.225) 0:12:45.974 *****
2026-02-14 05:49:55.137305 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:55.137315 | orchestrator |
2026-02-14 05:49:55.137326 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-14 05:49:55.137337 | orchestrator | Saturday 14 February 2026 05:49:34 +0000 (0:00:01.229) 0:12:47.203 *****
2026-02-14 05:49:55.137356 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:49:55.137367 | orchestrator |
2026-02-14 05:49:55.137393 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-14 05:49:55.137404 | orchestrator | Saturday 14 February 2026 05:49:35 +0000 (0:00:00.856) 0:12:48.059 *****
2026-02-14 05:49:55.137415 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1
2026-02-14 05:49:55.137427 | orchestrator |
2026-02-14 05:49:55.137438 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-14 05:49:55.137467 | orchestrator | Saturday 14 February 2026 05:49:36 +0000 (0:00:01.146) 0:12:49.206 *****
2026-02-14 05:49:55.137479 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph)
2026-02-14 05:49:55.137491 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/)
2026-02-14 05:49:55.137501 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-02-14 05:49:55.137512 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-02-14 05:49:55.137523 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-02-14 05:49:55.137534 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-02-14 05:49:55.137545 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-02-14 05:49:55.137556 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-02-14 05:49:55.137567 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-14 05:49:55.137578 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-14 05:49:55.137588 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-14 05:49:55.137599 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-14 05:49:55.137610 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-14 05:49:55.137621 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-14 05:49:55.137632 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph)
2026-02-14 05:49:55.137643 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph)
2026-02-14 05:49:55.137654 | orchestrator |
2026-02-14 05:49:55.137665 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-14 05:49:55.137676 | orchestrator | Saturday 14 February 2026 05:49:43 +0000 (0:00:06.716) 0:12:55.923 *****
2026-02-14 05:49:55.137686 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:55.137697 | orchestrator |
2026-02-14 05:49:55.137708 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-14 05:49:55.137719 | orchestrator | Saturday 14 February 2026 05:49:44 +0000 (0:00:00.785) 0:12:56.709 *****
2026-02-14 05:49:55.137730 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:55.137757 | orchestrator |
2026-02-14 05:49:55.137780 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-14 05:49:55.137792 | orchestrator | Saturday 14 February 2026 05:49:45 +0000 (0:00:00.772) 0:12:57.481 *****
2026-02-14 05:49:55.137803 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:55.137813 | orchestrator |
2026-02-14 05:49:55.137824 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-14 05:49:55.137835 | orchestrator | Saturday 14 February 2026 05:49:45 +0000 (0:00:00.835) 0:12:58.316 *****
2026-02-14 05:49:55.137846 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:55.137856 | orchestrator |
2026-02-14 05:49:55.137867 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-14 05:49:55.137878 | orchestrator | Saturday 14 February 2026 05:49:46 +0000 (0:00:00.782) 0:12:59.099 *****
2026-02-14 05:49:55.137889 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:55.137899 | orchestrator |
2026-02-14 05:49:55.137910 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-14 05:49:55.137921 | orchestrator | Saturday 14 February 2026 05:49:47 +0000 (0:00:00.789) 0:12:59.889 *****
2026-02-14 05:49:55.137940 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:55.137951 | orchestrator |
2026-02-14 05:49:55.137962 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-14 05:49:55.137973 | orchestrator | Saturday 14 February 2026 05:49:48 +0000 (0:00:00.840) 0:13:00.729 *****
2026-02-14 05:49:55.137984 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:55.137995 | orchestrator |
2026-02-14 05:49:55.138006 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-14 05:49:55.138073 | orchestrator | Saturday 14 February 2026 05:49:49 +0000 (0:00:00.785) 0:13:01.515 *****
2026-02-14 05:49:55.138086 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:55.138118 | orchestrator |
2026-02-14 05:49:55.138130 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-14 05:49:55.138141 | orchestrator | Saturday 14 February 2026 05:49:50 +0000 (0:00:00.828) 0:13:02.343 *****
2026-02-14 05:49:55.138152 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:55.138162 | orchestrator |
2026-02-14 05:49:55.138173 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-14 05:49:55.138183 | orchestrator | Saturday 14 February 2026 05:49:50 +0000 (0:00:00.850) 0:13:03.194 *****
2026-02-14 05:49:55.138194 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:55.138205 | orchestrator |
2026-02-14 05:49:55.138215 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-14 05:49:55.138226 | orchestrator | Saturday 14 February 2026 05:49:51 +0000 (0:00:00.819) 0:13:04.014 *****
2026-02-14 05:49:55.138236 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:55.138247 | orchestrator |
2026-02-14 05:49:55.138257 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-14 05:49:55.138268 | orchestrator | Saturday 14 February 2026 05:49:52 +0000 (0:00:00.775) 0:13:04.790 *****
2026-02-14 05:49:55.138279 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:55.138289 | orchestrator |
2026-02-14 05:49:55.138300 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-14 05:49:55.138311 | orchestrator | Saturday 14 February 2026 05:49:53 +0000 (0:00:00.862) 0:13:05.652 *****
2026-02-14 05:49:55.138321 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:55.138332 | orchestrator |
2026-02-14 05:49:55.138349 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-14 05:49:55.138360 | orchestrator | Saturday 14 February 2026 05:49:54 +0000 (0:00:00.929) 0:13:06.582 *****
2026-02-14 05:49:55.138371 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:49:55.138381 | orchestrator |
2026-02-14 05:49:55.138392 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-14 05:49:55.138410 | orchestrator | Saturday 14 February 2026 05:49:55 +0000 (0:00:00.863) 0:13:07.445 *****
2026-02-14 05:50:44.061851 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:50:44.061961 | orchestrator |
2026-02-14 05:50:44.061975 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-14 05:50:44.061987 | orchestrator | Saturday 14 February 2026 05:49:56 +0000 (0:00:00.905) 0:13:08.351 *****
2026-02-14 05:50:44.061997 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:50:44.062007 | orchestrator |
2026-02-14 05:50:44.062108 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-14 05:50:44.062120 | orchestrator | Saturday 14 February 2026 05:49:56 +0000 (0:00:00.766) 0:13:09.118 *****
2026-02-14 05:50:44.062130 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:50:44.062169 | orchestrator |
2026-02-14 05:50:44.062181 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-14 05:50:44.062192 | orchestrator | Saturday 14 February 2026 05:49:57 +0000 (0:00:00.830) 0:13:09.948 *****
2026-02-14 05:50:44.062202 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:50:44.062212 | orchestrator |
2026-02-14 05:50:44.062222 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-14 05:50:44.062253 | orchestrator | Saturday 14 February 2026 05:49:58 +0000 (0:00:00.814) 0:13:10.763 *****
2026-02-14 05:50:44.062264 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:50:44.062273 | orchestrator |
2026-02-14 05:50:44.062283 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-14 05:50:44.062292 | orchestrator | Saturday 14 February 2026 05:49:59 +0000 (0:00:00.788) 0:13:11.551 *****
2026-02-14 05:50:44.062302 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:50:44.062312 | orchestrator |
2026-02-14 05:50:44.062321 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-14 05:50:44.062331 | orchestrator | Saturday 14 February 2026 05:50:00 +0000 (0:00:00.818) 0:13:12.369 *****
2026-02-14 05:50:44.062341 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:50:44.062351 | orchestrator |
2026-02-14 05:50:44.062360 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-14 05:50:44.062370 | orchestrator | Saturday 14 February 2026 05:50:00 +0000 (0:00:00.771) 0:13:13.141 *****
2026-02-14 05:50:44.062379 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-14 05:50:44.062389 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-14 05:50:44.062399 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-14 05:50:44.062408 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:50:44.062418 | orchestrator |
2026-02-14 05:50:44.062427 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-14 05:50:44.062437 | orchestrator | Saturday 14 February 2026 05:50:01 +0000 (0:00:01.023) 0:13:14.164 *****
2026-02-14 05:50:44.062446 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-14 05:50:44.062456 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-14 05:50:44.062465 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-14 05:50:44.062475 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:50:44.062484 | orchestrator |
2026-02-14 05:50:44.062494 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-14 05:50:44.062503 | orchestrator | Saturday 14 February 2026 05:50:02 +0000 (0:00:01.111) 0:13:15.275 *****
2026-02-14 05:50:44.062513 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-14 05:50:44.062522 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-14 05:50:44.062531 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-14 05:50:44.062541 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:50:44.062550 | orchestrator |
2026-02-14 05:50:44.062560 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-14 05:50:44.062569 | orchestrator | Saturday 14 February 2026 05:50:04 +0000 (0:00:01.099) 0:13:16.375 *****
2026-02-14 05:50:44.062579 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:50:44.062588 | orchestrator |
2026-02-14 05:50:44.062598 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-14 05:50:44.062607 | orchestrator | Saturday 14 February 2026 05:50:04 +0000 (0:00:00.781) 0:13:17.157 *****
2026-02-14 05:50:44.062618 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-02-14 05:50:44.062627 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:50:44.062638 | orchestrator |
2026-02-14 05:50:44.062647 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-14 05:50:44.062657 | orchestrator | Saturday 14 February 2026 05:50:05 +0000 (0:00:01.149) 0:13:18.306 *****
2026-02-14 05:50:44.062667 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:50:44.062676 | orchestrator |
2026-02-14 05:50:44.062686 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-02-14 05:50:44.062695 | orchestrator | Saturday 14 February 2026 05:50:07 +0000 (0:00:01.466) 0:13:19.773 *****
2026-02-14 05:50:44.062705 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:50:44.062714 | orchestrator |
2026-02-14 05:50:44.062724 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-02-14 05:50:44.062746 | orchestrator | Saturday 14 February 2026 05:50:08 +0000 (0:00:00.833) 0:13:20.606 *****
2026-02-14 05:50:44.062764 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-1
2026-02-14 05:50:44.062781 | orchestrator |
2026-02-14 05:50:44.062799 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-02-14 05:50:44.062816 | orchestrator | Saturday 14 February 2026 05:50:09 +0000 (0:00:01.193) 0:13:21.800 *****
2026-02-14 05:50:44.062851 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)]
2026-02-14 05:50:44.062868 | orchestrator |
2026-02-14 05:50:44.062884 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-02-14 05:50:44.062902 | orchestrator | Saturday 14 February 2026 05:50:12 +0000 (0:00:03.240) 0:13:25.040 *****
2026-02-14 05:50:44.062919 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:50:44.062936 | orchestrator |
2026-02-14 05:50:44.062954 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-02-14 05:50:44.062996 | orchestrator | Saturday 14 February 2026 05:50:13 +0000 (0:00:01.219) 0:13:26.259 *****
2026-02-14 05:50:44.063014 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:50:44.063030 | orchestrator |
2026-02-14 05:50:44.063049 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-02-14 05:50:44.063068 | orchestrator | Saturday 14 February 2026 05:50:15 +0000 (0:00:01.173) 0:13:27.432 *****
2026-02-14 05:50:44.063112 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:50:44.063125 | orchestrator |
2026-02-14 05:50:44.063134 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-02-14 05:50:44.063144 | orchestrator | Saturday 14 February 2026 05:50:16 +0000 (0:00:01.158) 0:13:28.591 *****
2026-02-14 05:50:44.063154 | orchestrator | changed: [testbed-node-1]
2026-02-14 05:50:44.063163 | orchestrator |
2026-02-14 05:50:44.063173 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-02-14 05:50:44.063182 | orchestrator | Saturday 14 February 2026 05:50:18 +0000 (0:00:02.050) 0:13:30.642 *****
2026-02-14 05:50:44.063192 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:50:44.063201 | orchestrator |
2026-02-14 05:50:44.063211 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-02-14 05:50:44.063220 | orchestrator | Saturday 14 February 2026 05:50:19 +0000 (0:00:01.638) 0:13:32.281 *****
2026-02-14 05:50:44.063230 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:50:44.063240 | orchestrator |
2026-02-14 05:50:44.063249 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-02-14 05:50:44.063259 | orchestrator | Saturday 14 February 2026 05:50:21 +0000 (0:00:01.506) 0:13:33.787 *****
2026-02-14 05:50:44.063268 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:50:44.063278 | orchestrator |
2026-02-14 05:50:44.063288 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-02-14 05:50:44.063297 | orchestrator | Saturday 14 February 2026 05:50:23 +0000 (0:00:01.657) 0:13:35.444 *****
2026-02-14 05:50:44.063307 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-02-14 05:50:44.063316 | orchestrator |
2026-02-14 05:50:44.063326 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-02-14 05:50:44.063335 | orchestrator | Saturday 14 February 2026 05:50:24 +0000 (0:00:01.677) 0:13:37.122 *****
2026-02-14 05:50:44.063345 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-02-14 05:50:44.063355 | orchestrator |
2026-02-14 05:50:44.063364 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-02-14 05:50:44.063374 | orchestrator | Saturday 14 February 2026 05:50:26 +0000 (0:00:01.594) 0:13:38.716 *****
2026-02-14 05:50:44.063383 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-14 05:50:44.063393 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-02-14 05:50:44.063403 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-14 05:50:44.063412 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-02-14 05:50:44.063431 | orchestrator |
2026-02-14 05:50:44.063441 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-02-14 05:50:44.063451 | orchestrator | Saturday 14 February 2026 05:50:30 +0000 (0:00:04.055) 0:13:42.772 *****
2026-02-14 05:50:44.063460 | orchestrator | changed: [testbed-node-1]
2026-02-14 05:50:44.063470 | orchestrator |
2026-02-14 05:50:44.063480 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-02-14 05:50:44.063489 | orchestrator | Saturday 14 February 2026 05:50:32 +0000 (0:00:02.077) 0:13:44.850 *****
2026-02-14 05:50:44.063499 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:50:44.063509 | orchestrator |
2026-02-14 05:50:44.063518 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-02-14 05:50:44.063528 | orchestrator | Saturday 14 February 2026 05:50:33 +0000 (0:00:01.221) 0:13:46.072 *****
2026-02-14 05:50:44.063537 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:50:44.063547 | orchestrator |
2026-02-14 05:50:44.063557 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-02-14 05:50:44.063566 | orchestrator | Saturday 14 February 2026 05:50:34 +0000 (0:00:01.174) 0:13:47.247 *****
2026-02-14 05:50:44.063576 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:50:44.063586 | orchestrator |
2026-02-14 05:50:44.063596 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-02-14 05:50:44.063605 | orchestrator | Saturday 14 February 2026 05:50:36 +0000 (0:00:01.798) 0:13:49.045 *****
2026-02-14 05:50:44.063615 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:50:44.063624 | orchestrator |
2026-02-14 05:50:44.063634 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-02-14 05:50:44.063643 | orchestrator | Saturday 14 February 2026 05:50:38 +0000 (0:00:01.647) 0:13:50.694 *****
2026-02-14 05:50:44.063653 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:50:44.063663 | orchestrator |
2026-02-14 05:50:44.063672 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-02-14 05:50:44.063682 | orchestrator | Saturday 14 February 2026 05:50:39 +0000 (0:00:00.843) 0:13:51.538 *****
2026-02-14 05:50:44.063692 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-1
2026-02-14 05:50:44.063702 | orchestrator |
2026-02-14 05:50:44.063711 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-02-14 05:50:44.063721 | orchestrator | Saturday 14 February 2026 05:50:40 +0000 (0:00:01.157) 0:13:52.696 *****
2026-02-14 05:50:44.063731 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:50:44.063740 | orchestrator |
2026-02-14 05:50:44.063750 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-02-14 05:50:44.063766 | orchestrator | Saturday 14 February 2026 05:50:41 +0000 (0:00:01.123) 0:13:53.820 *****
2026-02-14 05:50:44.063776 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:50:44.063786 | orchestrator |
2026-02-14 05:50:44.063795 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-02-14 05:50:44.063805 | orchestrator | Saturday 14 February 2026 05:50:42 +0000 (0:00:01.292) 0:13:55.112 *****
2026-02-14 05:50:44.063815 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-1
2026-02-14 05:50:44.063824 | orchestrator |
2026-02-14 05:50:44.063842 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-02-14 05:51:53.659383 | orchestrator | Saturday 14 February 2026 05:50:44 +0000 (0:00:01.255) 0:13:56.368 *****
2026-02-14 05:51:53.659503 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:51:53.659520 | orchestrator |
2026-02-14 05:51:53.659533 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-02-14 05:51:53.659545 | orchestrator | Saturday 14 February 2026 05:50:46 +0000 (0:00:02.460) 0:13:58.828 *****
2026-02-14 05:51:53.659556 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:51:53.659567 | orchestrator |
2026-02-14 05:51:53.659578 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-02-14 05:51:53.659589 | orchestrator | Saturday 14 February 2026 05:50:48 +0000 (0:00:01.963) 0:14:00.792 *****
2026-02-14 05:51:53.659624 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:51:53.659635 | orchestrator |
2026-02-14 05:51:53.659646 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-02-14 05:51:53.659657 | orchestrator | Saturday 14 February 2026 05:50:51 +0000 (0:00:02.611) 0:14:03.403 *****
2026-02-14 05:51:53.659668 | orchestrator | changed: [testbed-node-1]
2026-02-14 05:51:53.659679 | orchestrator |
2026-02-14 05:51:53.659690 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-02-14 05:51:53.659700 | orchestrator | Saturday 14 February 2026 05:50:54 +0000 (0:00:02.945) 0:14:06.348 *****
2026-02-14 05:51:53.659711 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-1
2026-02-14 05:51:53.659722 | orchestrator |
2026-02-14 05:51:53.659733 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-02-14 05:51:53.659744 | orchestrator | Saturday 14 February 2026 05:50:55 +0000 (0:00:01.192) 0:14:07.541 *****
2026-02-14 05:51:53.659754 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-02-14 05:51:53.659765 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:51:53.659776 | orchestrator |
2026-02-14 05:51:53.659787 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-02-14 05:51:53.659798 | orchestrator | Saturday 14 February 2026 05:51:18 +0000 (0:00:23.013) 0:14:30.555 *****
2026-02-14 05:51:53.659808 | orchestrator | ok: [testbed-node-1]
2026-02-14 05:51:53.659819 | orchestrator |
2026-02-14 05:51:53.659830 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-02-14 05:51:53.659840 | orchestrator | Saturday 14 February 2026 05:51:21 +0000 (0:00:02.799) 0:14:33.355 *****
2026-02-14 05:51:53.659851 | orchestrator | skipping: [testbed-node-1]
2026-02-14 05:51:53.659862 | orchestrator |
2026-02-14 05:51:53.659872 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-02-14 05:51:53.659883 | orchestrator | Saturday 14 February 2026 05:51:21 +0000 (0:00:00.802) 0:14:34.157 *****
2026-02-14 05:51:53.659896 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__00bc4e30ddc9572289d35718eb4b72edbacd0b8b'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-02-14 05:51:53.659910 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__00bc4e30ddc9572289d35718eb4b72edbacd0b8b'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-02-14 05:51:53.659924 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__00bc4e30ddc9572289d35718eb4b72edbacd0b8b'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-14 05:51:53.659937 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__00bc4e30ddc9572289d35718eb4b72edbacd0b8b'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-14 05:51:53.659950 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__00bc4e30ddc9572289d35718eb4b72edbacd0b8b'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-14 05:51:53.660004 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__00bc4e30ddc9572289d35718eb4b72edbacd0b8b'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__00bc4e30ddc9572289d35718eb4b72edbacd0b8b'}])  2026-02-14 05:51:53.660021 | orchestrator | 2026-02-14 05:51:53.660034 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-02-14 05:51:53.660074 | orchestrator | Saturday 14 February 2026 05:51:31 +0000 (0:00:10.039) 0:14:44.197 ***** 2026-02-14 05:51:53.660086 | orchestrator | changed: [testbed-node-1] 2026-02-14 05:51:53.660099 | orchestrator | 
2026-02-14 05:51:53.660111 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-14 05:51:53.660123 | orchestrator | Saturday 14 February 2026 05:51:33 +0000 (0:00:02.125) 0:14:46.323 ***** 2026-02-14 05:51:53.660135 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 05:51:53.660148 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1) 2026-02-14 05:51:53.660159 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2) 2026-02-14 05:51:53.660172 | orchestrator | 2026-02-14 05:51:53.660184 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-14 05:51:53.660197 | orchestrator | Saturday 14 February 2026 05:51:35 +0000 (0:00:01.666) 0:14:47.989 ***** 2026-02-14 05:51:53.660208 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-14 05:51:53.660219 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-14 05:51:53.660230 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-14 05:51:53.660240 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:51:53.660251 | orchestrator | 2026-02-14 05:51:53.660261 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-02-14 05:51:53.660272 | orchestrator | Saturday 14 February 2026 05:51:36 +0000 (0:00:01.078) 0:14:49.068 ***** 2026-02-14 05:51:53.660283 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:51:53.660293 | orchestrator | 2026-02-14 05:51:53.660304 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-02-14 05:51:53.660315 | orchestrator | Saturday 14 February 2026 05:51:37 +0000 (0:00:00.786) 0:14:49.855 ***** 2026-02-14 05:51:53.660325 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:51:53.660336 | orchestrator | 2026-02-14 05:51:53.660346 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-02-14 05:51:53.660358 | orchestrator | 2026-02-14 05:51:53.660368 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-02-14 05:51:53.660379 | orchestrator | Saturday 14 February 2026 05:51:39 +0000 (0:00:02.243) 0:14:52.098 ***** 2026-02-14 05:51:53.660390 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:51:53.660400 | orchestrator | 2026-02-14 05:51:53.660411 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-02-14 05:51:53.660422 | orchestrator | Saturday 14 February 2026 05:51:40 +0000 (0:00:01.189) 0:14:53.288 ***** 2026-02-14 05:51:53.660432 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:51:53.660443 | orchestrator | 2026-02-14 05:51:53.660453 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-02-14 05:51:53.660464 | orchestrator | Saturday 14 February 2026 05:51:41 +0000 (0:00:00.883) 0:14:54.171 ***** 2026-02-14 05:51:53.660475 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:51:53.660485 | orchestrator | 2026-02-14 05:51:53.660496 | orchestrator | TASK [Select a running monitor] ************************************************ 2026-02-14 05:51:53.660507 | orchestrator | Saturday 14 February 2026 05:51:42 +0000 (0:00:00.787) 0:14:54.959 ***** 2026-02-14 05:51:53.660517 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:51:53.660528 | orchestrator | 2026-02-14 05:51:53.660538 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-14 05:51:53.660559 | orchestrator | Saturday 14 February 
2026 05:51:43 +0000 (0:00:00.808) 0:14:55.768 ***** 2026-02-14 05:51:53.660570 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2 2026-02-14 05:51:53.660580 | orchestrator | 2026-02-14 05:51:53.660591 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-14 05:51:53.660602 | orchestrator | Saturday 14 February 2026 05:51:44 +0000 (0:00:01.404) 0:14:57.172 ***** 2026-02-14 05:51:53.660612 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:51:53.660623 | orchestrator | 2026-02-14 05:51:53.660633 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-14 05:51:53.660644 | orchestrator | Saturday 14 February 2026 05:51:46 +0000 (0:00:01.468) 0:14:58.641 ***** 2026-02-14 05:51:53.660655 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:51:53.660666 | orchestrator | 2026-02-14 05:51:53.660676 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-14 05:51:53.660687 | orchestrator | Saturday 14 February 2026 05:51:47 +0000 (0:00:01.189) 0:14:59.831 ***** 2026-02-14 05:51:53.660697 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:51:53.660708 | orchestrator | 2026-02-14 05:51:53.660719 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-14 05:51:53.660730 | orchestrator | Saturday 14 February 2026 05:51:48 +0000 (0:00:01.451) 0:15:01.282 ***** 2026-02-14 05:51:53.660740 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:51:53.660751 | orchestrator | 2026-02-14 05:51:53.660762 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-14 05:51:53.660772 | orchestrator | Saturday 14 February 2026 05:51:50 +0000 (0:00:01.153) 0:15:02.436 ***** 2026-02-14 05:51:53.660783 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:51:53.660794 | orchestrator | 2026-02-14 05:51:53.660804 | 
orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-14 05:51:53.660815 | orchestrator | Saturday 14 February 2026 05:51:51 +0000 (0:00:01.177) 0:15:03.613 ***** 2026-02-14 05:51:53.660901 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:51:53.660916 | orchestrator | 2026-02-14 05:51:53.660927 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-14 05:51:53.660938 | orchestrator | Saturday 14 February 2026 05:51:52 +0000 (0:00:01.205) 0:15:04.818 ***** 2026-02-14 05:51:53.660948 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:51:53.660959 | orchestrator | 2026-02-14 05:51:53.660970 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-14 05:51:53.660988 | orchestrator | Saturday 14 February 2026 05:51:53 +0000 (0:00:01.150) 0:15:05.969 ***** 2026-02-14 05:52:20.185914 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:52:20.186181 | orchestrator | 2026-02-14 05:52:20.186212 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-14 05:52:20.186236 | orchestrator | Saturday 14 February 2026 05:51:54 +0000 (0:00:01.216) 0:15:07.186 ***** 2026-02-14 05:52:20.186256 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 05:52:20.186276 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 05:52:20.186295 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-14 05:52:20.186315 | orchestrator | 2026-02-14 05:52:20.186335 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-14 05:52:20.186354 | orchestrator | Saturday 14 February 2026 05:51:57 +0000 (0:00:02.150) 0:15:09.336 ***** 2026-02-14 05:52:20.186372 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:52:20.186390 | 
orchestrator | 2026-02-14 05:52:20.186409 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-14 05:52:20.186429 | orchestrator | Saturday 14 February 2026 05:51:58 +0000 (0:00:01.321) 0:15:10.658 ***** 2026-02-14 05:52:20.186449 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 05:52:20.186468 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 05:52:20.186525 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-14 05:52:20.186546 | orchestrator | 2026-02-14 05:52:20.186565 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-14 05:52:20.186577 | orchestrator | Saturday 14 February 2026 05:52:01 +0000 (0:00:03.291) 0:15:13.950 ***** 2026-02-14 05:52:20.186590 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-14 05:52:20.186603 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-14 05:52:20.186613 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-14 05:52:20.186624 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:52:20.186635 | orchestrator | 2026-02-14 05:52:20.186646 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-14 05:52:20.186657 | orchestrator | Saturday 14 February 2026 05:52:03 +0000 (0:00:01.958) 0:15:15.909 ***** 2026-02-14 05:52:20.186670 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-14 05:52:20.186685 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-14 05:52:20.186697 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-14 05:52:20.186708 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:52:20.186719 | orchestrator | 2026-02-14 05:52:20.186730 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-14 05:52:20.186742 | orchestrator | Saturday 14 February 2026 05:52:05 +0000 (0:00:02.161) 0:15:18.071 ***** 2026-02-14 05:52:20.186755 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 05:52:20.186769 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 05:52:20.186795 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 05:52:20.186807 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:52:20.186818 | orchestrator | 2026-02-14 05:52:20.186829 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-14 05:52:20.186839 | orchestrator | Saturday 14 February 2026 05:52:06 +0000 (0:00:01.205) 0:15:19.276 ***** 2026-02-14 05:52:20.186871 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'fcade5e8eca4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-14 05:51:58.856432', 'end': '2026-02-14 05:51:58.907517', 'delta': '0:00:00.051085', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fcade5e8eca4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-14 05:52:20.186896 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'b8937503c016', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-14 05:51:59.813468', 'end': '2026-02-14 05:51:59.854604', 'delta': '0:00:00.041136', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b8937503c016'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-14 05:52:20.186907 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '7aff8e7c54ed', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-14 05:52:00.436068', 'end': '2026-02-14 05:52:00.485291', 'delta': '0:00:00.049223', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['7aff8e7c54ed'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-14 05:52:20.186917 | orchestrator | 2026-02-14 05:52:20.186927 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-14 05:52:20.186937 | orchestrator | Saturday 14 February 2026 05:52:08 +0000 (0:00:01.242) 0:15:20.519 ***** 2026-02-14 05:52:20.186947 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:52:20.186956 | orchestrator | 2026-02-14 05:52:20.186966 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-14 05:52:20.186976 | orchestrator | Saturday 14 February 2026 05:52:09 +0000 (0:00:01.323) 0:15:21.842 ***** 2026-02-14 05:52:20.186985 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:52:20.186995 | orchestrator | 2026-02-14 05:52:20.187004 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-14 05:52:20.187014 | orchestrator | Saturday 14 February 2026 05:52:10 +0000 (0:00:01.322) 0:15:23.165 ***** 2026-02-14 05:52:20.187023 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:52:20.187068 | orchestrator | 2026-02-14 05:52:20.187078 | orchestrator | TASK 
[ceph-facts : Get current fsid] ******************************************* 2026-02-14 05:52:20.187088 | orchestrator | Saturday 14 February 2026 05:52:12 +0000 (0:00:01.215) 0:15:24.381 ***** 2026-02-14 05:52:20.187097 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] 2026-02-14 05:52:20.187107 | orchestrator | 2026-02-14 05:52:20.187116 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-14 05:52:20.187125 | orchestrator | Saturday 14 February 2026 05:52:14 +0000 (0:00:02.037) 0:15:26.418 ***** 2026-02-14 05:52:20.187135 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:52:20.187145 | orchestrator | 2026-02-14 05:52:20.187154 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-14 05:52:20.187164 | orchestrator | Saturday 14 February 2026 05:52:15 +0000 (0:00:01.260) 0:15:27.678 ***** 2026-02-14 05:52:20.187173 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:52:20.187183 | orchestrator | 2026-02-14 05:52:20.187192 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-14 05:52:20.187209 | orchestrator | Saturday 14 February 2026 05:52:16 +0000 (0:00:01.173) 0:15:28.852 ***** 2026-02-14 05:52:20.187218 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:52:20.187228 | orchestrator | 2026-02-14 05:52:20.187237 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-14 05:52:20.187252 | orchestrator | Saturday 14 February 2026 05:52:17 +0000 (0:00:01.302) 0:15:30.155 ***** 2026-02-14 05:52:20.187262 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:52:20.187271 | orchestrator | 2026-02-14 05:52:20.187281 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-14 05:52:20.187290 | orchestrator | Saturday 14 February 2026 05:52:19 +0000 (0:00:01.179) 0:15:31.334 ***** 
2026-02-14 05:52:20.187300 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:52:20.187309 | orchestrator | 2026-02-14 05:52:20.187319 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-14 05:52:20.187335 | orchestrator | Saturday 14 February 2026 05:52:20 +0000 (0:00:01.165) 0:15:32.499 ***** 2026-02-14 05:52:28.855942 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:52:28.856098 | orchestrator | 2026-02-14 05:52:28.856117 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-14 05:52:28.856131 | orchestrator | Saturday 14 February 2026 05:52:21 +0000 (0:00:01.128) 0:15:33.629 ***** 2026-02-14 05:52:28.856142 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:52:28.856153 | orchestrator | 2026-02-14 05:52:28.856165 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-14 05:52:28.856175 | orchestrator | Saturday 14 February 2026 05:52:22 +0000 (0:00:01.250) 0:15:34.879 ***** 2026-02-14 05:52:28.856186 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:52:28.856197 | orchestrator | 2026-02-14 05:52:28.856208 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-14 05:52:28.856219 | orchestrator | Saturday 14 February 2026 05:52:23 +0000 (0:00:01.277) 0:15:36.157 ***** 2026-02-14 05:52:28.856229 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:52:28.856240 | orchestrator | 2026-02-14 05:52:28.856251 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-14 05:52:28.856262 | orchestrator | Saturday 14 February 2026 05:52:25 +0000 (0:00:01.187) 0:15:37.345 ***** 2026-02-14 05:52:28.856273 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:52:28.856284 | orchestrator | 2026-02-14 05:52:28.856294 | orchestrator | TASK [ceph-facts : Collect existed devices] 
************************************ 2026-02-14 05:52:28.856305 | orchestrator | Saturday 14 February 2026 05:52:26 +0000 (0:00:01.181) 0:15:38.526 ***** 2026-02-14 05:52:28.856318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:52:28.856333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:52:28.856344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:52:28.856356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-07-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 
'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-14 05:52:28.856393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:52:28.856421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:52:28.856452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:52:28.856468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b284434b', 'removable': '0', 'support_discard': '4096', 'partitions': 
{'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part16', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part14', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part15', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part1', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-14 05:52:28.856482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:52:28.856502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 05:52:28.856514 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:52:28.856525 | orchestrator | 2026-02-14 05:52:28.856536 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-14 05:52:28.856547 | orchestrator | Saturday 14 February 2026 05:52:27 +0000 (0:00:01.302) 0:15:39.829 ***** 2026-02-14 05:52:28.856565 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:52:28.856587 | 
orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:52:36.882218 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:52:36.882335 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-07-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:52:36.882352 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:52:36.882387 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:52:36.882399 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:52:36.882451 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b284434b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part16', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part14', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part15', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part1', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:52:36.882477 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:52:36.882489 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 05:52:36.882502 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:52:36.882515 | orchestrator | 2026-02-14 05:52:36.882526 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-14 05:52:36.882539 | 
orchestrator | Saturday 14 February 2026 05:52:28 +0000 (0:00:01.346) 0:15:41.176 ***** 2026-02-14 05:52:36.882550 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:52:36.882562 | orchestrator | 2026-02-14 05:52:36.882573 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-14 05:52:36.882584 | orchestrator | Saturday 14 February 2026 05:52:30 +0000 (0:00:01.610) 0:15:42.786 ***** 2026-02-14 05:52:36.882594 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:52:36.882605 | orchestrator | 2026-02-14 05:52:36.882616 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-14 05:52:36.882632 | orchestrator | Saturday 14 February 2026 05:52:31 +0000 (0:00:01.154) 0:15:43.941 ***** 2026-02-14 05:52:36.882643 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:52:36.882654 | orchestrator | 2026-02-14 05:52:36.882665 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-14 05:52:36.882676 | orchestrator | Saturday 14 February 2026 05:52:33 +0000 (0:00:01.614) 0:15:45.555 ***** 2026-02-14 05:52:36.882689 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:52:36.882701 | orchestrator | 2026-02-14 05:52:36.882714 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-14 05:52:36.882726 | orchestrator | Saturday 14 February 2026 05:52:34 +0000 (0:00:01.198) 0:15:46.753 ***** 2026-02-14 05:52:36.882739 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:52:36.882751 | orchestrator | 2026-02-14 05:52:36.882763 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-14 05:52:36.882776 | orchestrator | Saturday 14 February 2026 05:52:35 +0000 (0:00:01.300) 0:15:48.053 ***** 2026-02-14 05:52:36.882789 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:52:36.882801 | orchestrator | 2026-02-14 05:52:36.882813 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-14 05:52:36.882833 | orchestrator | Saturday 14 February 2026 05:52:36 +0000 (0:00:01.144) 0:15:49.198 ***** 2026-02-14 05:53:16.897147 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-02-14 05:53:16.897270 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-02-14 05:53:16.897295 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-14 05:53:16.897316 | orchestrator | 2026-02-14 05:53:16.897338 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-14 05:53:16.897360 | orchestrator | Saturday 14 February 2026 05:52:38 +0000 (0:00:02.086) 0:15:51.285 ***** 2026-02-14 05:53:16.897382 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-14 05:53:16.897426 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-14 05:53:16.897439 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-14 05:53:16.897450 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:16.897460 | orchestrator | 2026-02-14 05:53:16.897471 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-14 05:53:16.897482 | orchestrator | Saturday 14 February 2026 05:52:40 +0000 (0:00:01.148) 0:15:52.434 ***** 2026-02-14 05:53:16.897493 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:16.897504 | orchestrator | 2026-02-14 05:53:16.897516 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-14 05:53:16.897526 | orchestrator | Saturday 14 February 2026 05:52:41 +0000 (0:00:01.340) 0:15:53.774 ***** 2026-02-14 05:53:16.897537 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 05:53:16.897549 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => 
(item=testbed-node-1) 2026-02-14 05:53:16.897560 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-14 05:53:16.897570 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-14 05:53:16.897581 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-14 05:53:16.897592 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-14 05:53:16.897603 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-14 05:53:16.897613 | orchestrator | 2026-02-14 05:53:16.897626 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-14 05:53:16.897644 | orchestrator | Saturday 14 February 2026 05:52:43 +0000 (0:00:01.850) 0:15:55.625 ***** 2026-02-14 05:53:16.897663 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 05:53:16.897682 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 05:53:16.897701 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-14 05:53:16.897721 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-14 05:53:16.897741 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-14 05:53:16.897761 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-14 05:53:16.897781 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-14 05:53:16.897794 | orchestrator | 2026-02-14 05:53:16.897806 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-02-14 05:53:16.897818 | orchestrator | Saturday 14 February 2026 05:52:45 +0000 (0:00:02.234) 0:15:57.860 
***** 2026-02-14 05:53:16.897831 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:16.897843 | orchestrator | 2026-02-14 05:53:16.897855 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-02-14 05:53:16.897868 | orchestrator | Saturday 14 February 2026 05:52:46 +0000 (0:00:00.900) 0:15:58.760 ***** 2026-02-14 05:53:16.897880 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:16.897892 | orchestrator | 2026-02-14 05:53:16.897905 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-02-14 05:53:16.897917 | orchestrator | Saturday 14 February 2026 05:52:47 +0000 (0:00:00.915) 0:15:59.675 ***** 2026-02-14 05:53:16.897929 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:16.897942 | orchestrator | 2026-02-14 05:53:16.897955 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-02-14 05:53:16.897968 | orchestrator | Saturday 14 February 2026 05:52:48 +0000 (0:00:00.831) 0:16:00.507 ***** 2026-02-14 05:53:16.897980 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:16.897991 | orchestrator | 2026-02-14 05:53:16.898002 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-02-14 05:53:16.898154 | orchestrator | Saturday 14 February 2026 05:52:49 +0000 (0:00:00.879) 0:16:01.387 ***** 2026-02-14 05:53:16.898222 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:16.898235 | orchestrator | 2026-02-14 05:53:16.898246 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-02-14 05:53:16.898256 | orchestrator | Saturday 14 February 2026 05:52:49 +0000 (0:00:00.767) 0:16:02.154 ***** 2026-02-14 05:53:16.898267 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-14 05:53:16.898278 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-14 
05:53:16.898288 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-14 05:53:16.898299 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:16.898310 | orchestrator | 2026-02-14 05:53:16.898320 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-02-14 05:53:16.898331 | orchestrator | Saturday 14 February 2026 05:52:50 +0000 (0:00:01.054) 0:16:03.209 ***** 2026-02-14 05:53:16.898342 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-0'])  2026-02-14 05:53:16.898353 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-1'])  2026-02-14 05:53:16.898386 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-2'])  2026-02-14 05:53:16.898406 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])  2026-02-14 05:53:16.898424 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])  2026-02-14 05:53:16.898441 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])  2026-02-14 05:53:16.898460 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:16.898479 | orchestrator | 2026-02-14 05:53:16.898499 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-02-14 05:53:16.898517 | orchestrator | Saturday 14 February 2026 05:52:52 +0000 (0:00:01.667) 0:16:04.876 ***** 2026-02-14 05:53:16.898536 | orchestrator | changed: [testbed-node-2] => (item=testbed-node-2) 2026-02-14 05:53:16.898549 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-14 05:53:16.898560 | orchestrator | 2026-02-14 05:53:16.898571 | orchestrator | TASK [Mask the mgr service] **************************************************** 2026-02-14 05:53:16.898581 | orchestrator | Saturday 14 February 2026 05:52:55 +0000 (0:00:03.061) 0:16:07.938 ***** 
2026-02-14 05:53:16.898592 | orchestrator | changed: [testbed-node-2] 2026-02-14 05:53:16.898603 | orchestrator | 2026-02-14 05:53:16.898614 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-14 05:53:16.898624 | orchestrator | Saturday 14 February 2026 05:52:57 +0000 (0:00:02.144) 0:16:10.083 ***** 2026-02-14 05:53:16.898635 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2 2026-02-14 05:53:16.898647 | orchestrator | 2026-02-14 05:53:16.898658 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-14 05:53:16.898668 | orchestrator | Saturday 14 February 2026 05:52:59 +0000 (0:00:01.535) 0:16:11.618 ***** 2026-02-14 05:53:16.898679 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2 2026-02-14 05:53:16.898690 | orchestrator | 2026-02-14 05:53:16.898700 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-14 05:53:16.898711 | orchestrator | Saturday 14 February 2026 05:53:00 +0000 (0:00:01.209) 0:16:12.828 ***** 2026-02-14 05:53:16.898722 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:53:16.898733 | orchestrator | 2026-02-14 05:53:16.898743 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-14 05:53:16.898754 | orchestrator | Saturday 14 February 2026 05:53:02 +0000 (0:00:01.578) 0:16:14.407 ***** 2026-02-14 05:53:16.898769 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:16.898787 | orchestrator | 2026-02-14 05:53:16.898806 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-14 05:53:16.898837 | orchestrator | Saturday 14 February 2026 05:53:03 +0000 (0:00:01.185) 0:16:15.592 ***** 2026-02-14 05:53:16.898858 | orchestrator | skipping: [testbed-node-2] 2026-02-14 
05:53:16.898876 | orchestrator | 2026-02-14 05:53:16.898896 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-14 05:53:16.898907 | orchestrator | Saturday 14 February 2026 05:53:04 +0000 (0:00:01.153) 0:16:16.746 ***** 2026-02-14 05:53:16.898918 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:16.898929 | orchestrator | 2026-02-14 05:53:16.898939 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-14 05:53:16.898950 | orchestrator | Saturday 14 February 2026 05:53:05 +0000 (0:00:01.316) 0:16:18.063 ***** 2026-02-14 05:53:16.898961 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:53:16.898972 | orchestrator | 2026-02-14 05:53:16.898982 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-14 05:53:16.898993 | orchestrator | Saturday 14 February 2026 05:53:07 +0000 (0:00:01.569) 0:16:19.632 ***** 2026-02-14 05:53:16.899004 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:16.899041 | orchestrator | 2026-02-14 05:53:16.899052 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-14 05:53:16.899063 | orchestrator | Saturday 14 February 2026 05:53:08 +0000 (0:00:01.163) 0:16:20.796 ***** 2026-02-14 05:53:16.899074 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:16.899085 | orchestrator | 2026-02-14 05:53:16.899096 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-14 05:53:16.899106 | orchestrator | Saturday 14 February 2026 05:53:09 +0000 (0:00:01.131) 0:16:21.928 ***** 2026-02-14 05:53:16.899117 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:53:16.899133 | orchestrator | 2026-02-14 05:53:16.899151 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-14 05:53:16.899169 | orchestrator | Saturday 14 February 
2026 05:53:11 +0000 (0:00:01.589) 0:16:23.517 ***** 2026-02-14 05:53:16.899187 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:53:16.899206 | orchestrator | 2026-02-14 05:53:16.899225 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-14 05:53:16.899253 | orchestrator | Saturday 14 February 2026 05:53:12 +0000 (0:00:01.526) 0:16:25.044 ***** 2026-02-14 05:53:16.899272 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:16.899284 | orchestrator | 2026-02-14 05:53:16.899295 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-14 05:53:16.899306 | orchestrator | Saturday 14 February 2026 05:53:13 +0000 (0:00:00.906) 0:16:25.950 ***** 2026-02-14 05:53:16.899317 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:53:16.899327 | orchestrator | 2026-02-14 05:53:16.899338 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-14 05:53:16.899349 | orchestrator | Saturday 14 February 2026 05:53:14 +0000 (0:00:00.843) 0:16:26.794 ***** 2026-02-14 05:53:16.899359 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:16.899370 | orchestrator | 2026-02-14 05:53:16.899381 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-14 05:53:16.899392 | orchestrator | Saturday 14 February 2026 05:53:15 +0000 (0:00:00.783) 0:16:27.577 ***** 2026-02-14 05:53:16.899403 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:16.899413 | orchestrator | 2026-02-14 05:53:16.899424 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-14 05:53:16.899435 | orchestrator | Saturday 14 February 2026 05:53:16 +0000 (0:00:00.838) 0:16:28.416 ***** 2026-02-14 05:53:16.899456 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:58.590284 | orchestrator | 2026-02-14 05:53:58.590369 | orchestrator | TASK 
[ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-14 05:53:58.590377 | orchestrator | Saturday 14 February 2026 05:53:16 +0000 (0:00:00.795) 0:16:29.212 ***** 2026-02-14 05:53:58.590382 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:58.590387 | orchestrator | 2026-02-14 05:53:58.590391 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-14 05:53:58.590409 | orchestrator | Saturday 14 February 2026 05:53:17 +0000 (0:00:00.791) 0:16:30.003 ***** 2026-02-14 05:53:58.590414 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:58.590417 | orchestrator | 2026-02-14 05:53:58.590421 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-14 05:53:58.590425 | orchestrator | Saturday 14 February 2026 05:53:18 +0000 (0:00:00.823) 0:16:30.827 ***** 2026-02-14 05:53:58.590429 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:53:58.590434 | orchestrator | 2026-02-14 05:53:58.590438 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-14 05:53:58.590442 | orchestrator | Saturday 14 February 2026 05:53:19 +0000 (0:00:00.799) 0:16:31.627 ***** 2026-02-14 05:53:58.590446 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:53:58.590450 | orchestrator | 2026-02-14 05:53:58.590454 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-14 05:53:58.590458 | orchestrator | Saturday 14 February 2026 05:53:20 +0000 (0:00:00.807) 0:16:32.435 ***** 2026-02-14 05:53:58.590461 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:53:58.590465 | orchestrator | 2026-02-14 05:53:58.590469 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-14 05:53:58.590473 | orchestrator | Saturday 14 February 2026 05:53:20 +0000 (0:00:00.816) 0:16:33.252 ***** 2026-02-14 05:53:58.590476 | 
orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:58.590480 | orchestrator | 2026-02-14 05:53:58.590484 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-14 05:53:58.590488 | orchestrator | Saturday 14 February 2026 05:53:21 +0000 (0:00:00.818) 0:16:34.070 ***** 2026-02-14 05:53:58.590492 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:58.590495 | orchestrator | 2026-02-14 05:53:58.590499 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-14 05:53:58.590503 | orchestrator | Saturday 14 February 2026 05:53:22 +0000 (0:00:00.806) 0:16:34.877 ***** 2026-02-14 05:53:58.590507 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:58.590510 | orchestrator | 2026-02-14 05:53:58.590514 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-14 05:53:58.590518 | orchestrator | Saturday 14 February 2026 05:53:23 +0000 (0:00:00.965) 0:16:35.842 ***** 2026-02-14 05:53:58.590522 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:58.590526 | orchestrator | 2026-02-14 05:53:58.590529 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-14 05:53:58.590534 | orchestrator | Saturday 14 February 2026 05:53:24 +0000 (0:00:00.852) 0:16:36.694 ***** 2026-02-14 05:53:58.590537 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:58.590541 | orchestrator | 2026-02-14 05:53:58.590545 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-14 05:53:58.590549 | orchestrator | Saturday 14 February 2026 05:53:25 +0000 (0:00:00.865) 0:16:37.560 ***** 2026-02-14 05:53:58.590552 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:58.590556 | orchestrator | 2026-02-14 05:53:58.590560 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 
2026-02-14 05:53:58.590564 | orchestrator | Saturday 14 February 2026 05:53:26 +0000 (0:00:00.827) 0:16:38.387 ***** 2026-02-14 05:53:58.590568 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:58.590572 | orchestrator | 2026-02-14 05:53:58.590576 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-14 05:53:58.590580 | orchestrator | Saturday 14 February 2026 05:53:26 +0000 (0:00:00.810) 0:16:39.198 ***** 2026-02-14 05:53:58.590584 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:58.590588 | orchestrator | 2026-02-14 05:53:58.590592 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-14 05:53:58.590596 | orchestrator | Saturday 14 February 2026 05:53:27 +0000 (0:00:00.790) 0:16:39.989 ***** 2026-02-14 05:53:58.590599 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:58.590603 | orchestrator | 2026-02-14 05:53:58.590607 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-14 05:53:58.590615 | orchestrator | Saturday 14 February 2026 05:53:28 +0000 (0:00:00.830) 0:16:40.819 ***** 2026-02-14 05:53:58.590618 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:58.590622 | orchestrator | 2026-02-14 05:53:58.590626 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-14 05:53:58.590630 | orchestrator | Saturday 14 February 2026 05:53:29 +0000 (0:00:00.788) 0:16:41.607 ***** 2026-02-14 05:53:58.590634 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:58.590637 | orchestrator | 2026-02-14 05:53:58.590653 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-14 05:53:58.590657 | orchestrator | Saturday 14 February 2026 05:53:30 +0000 (0:00:00.818) 0:16:42.426 ***** 2026-02-14 05:53:58.590660 | orchestrator | skipping: [testbed-node-2] 2026-02-14 
05:53:58.590664 | orchestrator | 2026-02-14 05:53:58.590668 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-14 05:53:58.590672 | orchestrator | Saturday 14 February 2026 05:53:30 +0000 (0:00:00.831) 0:16:43.258 ***** 2026-02-14 05:53:58.590676 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:53:58.590680 | orchestrator | 2026-02-14 05:53:58.590683 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-14 05:53:58.590687 | orchestrator | Saturday 14 February 2026 05:53:32 +0000 (0:00:01.629) 0:16:44.887 ***** 2026-02-14 05:53:58.590691 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:53:58.590695 | orchestrator | 2026-02-14 05:53:58.590699 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-14 05:53:58.590702 | orchestrator | Saturday 14 February 2026 05:53:34 +0000 (0:00:02.096) 0:16:46.984 ***** 2026-02-14 05:53:58.590706 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2 2026-02-14 05:53:58.590710 | orchestrator | 2026-02-14 05:53:58.590724 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-14 05:53:58.590728 | orchestrator | Saturday 14 February 2026 05:53:36 +0000 (0:00:01.477) 0:16:48.462 ***** 2026-02-14 05:53:58.590732 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:58.590736 | orchestrator | 2026-02-14 05:53:58.590740 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-14 05:53:58.590743 | orchestrator | Saturday 14 February 2026 05:53:37 +0000 (0:00:01.233) 0:16:49.695 ***** 2026-02-14 05:53:58.590747 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:58.590751 | orchestrator | 2026-02-14 05:53:58.590755 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 
2026-02-14 05:53:58.590759 | orchestrator | Saturday 14 February 2026 05:53:38 +0000 (0:00:01.211) 0:16:50.907 ***** 2026-02-14 05:53:58.590762 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-14 05:53:58.590766 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-14 05:53:58.590770 | orchestrator | 2026-02-14 05:53:58.590774 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-14 05:53:58.590778 | orchestrator | Saturday 14 February 2026 05:53:40 +0000 (0:00:01.861) 0:16:52.769 ***** 2026-02-14 05:53:58.590781 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:53:58.590785 | orchestrator | 2026-02-14 05:53:58.590789 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-14 05:53:58.590793 | orchestrator | Saturday 14 February 2026 05:53:41 +0000 (0:00:01.536) 0:16:54.305 ***** 2026-02-14 05:53:58.590797 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:58.590801 | orchestrator | 2026-02-14 05:53:58.590804 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-14 05:53:58.590808 | orchestrator | Saturday 14 February 2026 05:53:43 +0000 (0:00:01.153) 0:16:55.459 ***** 2026-02-14 05:53:58.590812 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:58.590816 | orchestrator | 2026-02-14 05:53:58.590819 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-14 05:53:58.590823 | orchestrator | Saturday 14 February 2026 05:53:43 +0000 (0:00:00.802) 0:16:56.262 ***** 2026-02-14 05:53:58.590831 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:58.590835 | orchestrator | 2026-02-14 05:53:58.590838 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-14 05:53:58.590842 | orchestrator | 
Saturday 14 February 2026 05:53:44 +0000 (0:00:00.801) 0:16:57.064 ***** 2026-02-14 05:53:58.590846 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2 2026-02-14 05:53:58.590851 | orchestrator | 2026-02-14 05:53:58.590855 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-14 05:53:58.590859 | orchestrator | Saturday 14 February 2026 05:53:45 +0000 (0:00:01.161) 0:16:58.226 ***** 2026-02-14 05:53:58.590864 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:53:58.590868 | orchestrator | 2026-02-14 05:53:58.590873 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-14 05:53:58.590897 | orchestrator | Saturday 14 February 2026 05:53:47 +0000 (0:00:01.758) 0:16:59.984 ***** 2026-02-14 05:53:58.590901 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-14 05:53:58.590906 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-14 05:53:58.590910 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-14 05:53:58.590914 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:58.590919 | orchestrator | 2026-02-14 05:53:58.590923 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-14 05:53:58.590927 | orchestrator | Saturday 14 February 2026 05:53:48 +0000 (0:00:01.184) 0:17:01.169 ***** 2026-02-14 05:53:58.590932 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:58.590936 | orchestrator | 2026-02-14 05:53:58.590940 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-14 05:53:58.590945 | orchestrator | Saturday 14 February 2026 05:53:50 +0000 (0:00:01.195) 0:17:02.365 ***** 2026-02-14 05:53:58.590949 | orchestrator | skipping: [testbed-node-2] 2026-02-14 
05:53:58.590953 | orchestrator | 2026-02-14 05:53:58.590957 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-14 05:53:58.590961 | orchestrator | Saturday 14 February 2026 05:53:51 +0000 (0:00:01.298) 0:17:03.663 ***** 2026-02-14 05:53:58.590966 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:58.590970 | orchestrator | 2026-02-14 05:53:58.590974 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-14 05:53:58.590978 | orchestrator | Saturday 14 February 2026 05:53:52 +0000 (0:00:01.177) 0:17:04.841 ***** 2026-02-14 05:53:58.590983 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:58.590987 | orchestrator | 2026-02-14 05:53:58.590994 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-14 05:53:58.591015 | orchestrator | Saturday 14 February 2026 05:53:53 +0000 (0:00:01.153) 0:17:05.995 ***** 2026-02-14 05:53:58.591022 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:53:58.591029 | orchestrator | 2026-02-14 05:53:58.591035 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-14 05:53:58.591042 | orchestrator | Saturday 14 February 2026 05:53:54 +0000 (0:00:00.786) 0:17:06.781 ***** 2026-02-14 05:53:58.591048 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:53:58.591056 | orchestrator | 2026-02-14 05:53:58.591061 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-14 05:53:58.591065 | orchestrator | Saturday 14 February 2026 05:53:56 +0000 (0:00:02.197) 0:17:08.979 ***** 2026-02-14 05:53:58.591069 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:53:58.591074 | orchestrator | 2026-02-14 05:53:58.591078 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-14 05:53:58.591082 | orchestrator | Saturday 14 February 
2026 05:53:57 +0000 (0:00:00.797) 0:17:09.777 ***** 2026-02-14 05:53:58.591087 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2 2026-02-14 05:53:58.591095 | orchestrator | 2026-02-14 05:53:58.591102 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-14 05:54:36.267125 | orchestrator | Saturday 14 February 2026 05:53:58 +0000 (0:00:01.125) 0:17:10.902 ***** 2026-02-14 05:54:36.267246 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:54:36.267264 | orchestrator | 2026-02-14 05:54:36.267277 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-14 05:54:36.267288 | orchestrator | Saturday 14 February 2026 05:53:59 +0000 (0:00:01.141) 0:17:12.044 ***** 2026-02-14 05:54:36.267299 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:54:36.267310 | orchestrator | 2026-02-14 05:54:36.267322 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-14 05:54:36.267332 | orchestrator | Saturday 14 February 2026 05:54:00 +0000 (0:00:01.200) 0:17:13.245 ***** 2026-02-14 05:54:36.267343 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:54:36.267354 | orchestrator | 2026-02-14 05:54:36.267365 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-14 05:54:36.267376 | orchestrator | Saturday 14 February 2026 05:54:02 +0000 (0:00:01.152) 0:17:14.397 ***** 2026-02-14 05:54:36.267386 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:54:36.267397 | orchestrator | 2026-02-14 05:54:36.267408 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-14 05:54:36.267419 | orchestrator | Saturday 14 February 2026 05:54:03 +0000 (0:00:01.183) 0:17:15.581 ***** 2026-02-14 05:54:36.267430 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:54:36.267441 | 
orchestrator | 2026-02-14 05:54:36.267452 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-14 05:54:36.267462 | orchestrator | Saturday 14 February 2026 05:54:04 +0000 (0:00:01.188) 0:17:16.769 ***** 2026-02-14 05:54:36.267473 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:54:36.267484 | orchestrator | 2026-02-14 05:54:36.267494 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-14 05:54:36.267505 | orchestrator | Saturday 14 February 2026 05:54:05 +0000 (0:00:01.274) 0:17:18.043 ***** 2026-02-14 05:54:36.267516 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:54:36.267527 | orchestrator | 2026-02-14 05:54:36.267538 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-14 05:54:36.267549 | orchestrator | Saturday 14 February 2026 05:54:06 +0000 (0:00:01.266) 0:17:19.310 ***** 2026-02-14 05:54:36.267559 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:54:36.267570 | orchestrator | 2026-02-14 05:54:36.267581 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-14 05:54:36.267594 | orchestrator | Saturday 14 February 2026 05:54:08 +0000 (0:00:01.215) 0:17:20.525 ***** 2026-02-14 05:54:36.267607 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:54:36.267621 | orchestrator | 2026-02-14 05:54:36.267633 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-14 05:54:36.267646 | orchestrator | Saturday 14 February 2026 05:54:09 +0000 (0:00:00.823) 0:17:21.349 ***** 2026-02-14 05:54:36.267658 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2 2026-02-14 05:54:36.267671 | orchestrator | 2026-02-14 05:54:36.267684 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-14 
05:54:36.267696 | orchestrator | Saturday 14 February 2026 05:54:10 +0000 (0:00:01.172) 0:17:22.522 ***** 2026-02-14 05:54:36.267709 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph) 2026-02-14 05:54:36.267721 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/) 2026-02-14 05:54:36.267734 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-02-14 05:54:36.267757 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-02-14 05:54:36.267770 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-02-14 05:54:36.267782 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-02-14 05:54:36.267795 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-02-14 05:54:36.267807 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-02-14 05:54:36.267842 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-14 05:54:36.267856 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-14 05:54:36.267868 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-14 05:54:36.267880 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-14 05:54:36.267892 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-14 05:54:36.267905 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-14 05:54:36.267918 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph) 2026-02-14 05:54:36.267931 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph) 2026-02-14 05:54:36.267942 | orchestrator | 2026-02-14 05:54:36.267968 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-14 05:54:36.267979 | orchestrator | Saturday 14 February 2026 05:54:16 +0000 (0:00:06.332) 0:17:28.855 ***** 2026-02-14 05:54:36.268027 | orchestrator | skipping: 
[testbed-node-2] 2026-02-14 05:54:36.268039 | orchestrator | 2026-02-14 05:54:36.268050 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-14 05:54:36.268061 | orchestrator | Saturday 14 February 2026 05:54:17 +0000 (0:00:00.862) 0:17:29.717 ***** 2026-02-14 05:54:36.268071 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:54:36.268082 | orchestrator | 2026-02-14 05:54:36.268093 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-14 05:54:36.268104 | orchestrator | Saturday 14 February 2026 05:54:18 +0000 (0:00:00.848) 0:17:30.566 ***** 2026-02-14 05:54:36.268114 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:54:36.268125 | orchestrator | 2026-02-14 05:54:36.268136 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-14 05:54:36.268146 | orchestrator | Saturday 14 February 2026 05:54:19 +0000 (0:00:00.804) 0:17:31.370 ***** 2026-02-14 05:54:36.268157 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:54:36.268168 | orchestrator | 2026-02-14 05:54:36.268179 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-14 05:54:36.268209 | orchestrator | Saturday 14 February 2026 05:54:19 +0000 (0:00:00.781) 0:17:32.152 ***** 2026-02-14 05:54:36.268220 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:54:36.268231 | orchestrator | 2026-02-14 05:54:36.268242 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-14 05:54:36.268252 | orchestrator | Saturday 14 February 2026 05:54:20 +0000 (0:00:00.847) 0:17:33.000 ***** 2026-02-14 05:54:36.268264 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:54:36.268274 | orchestrator | 2026-02-14 05:54:36.268285 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 
2026-02-14 05:54:36.268296 | orchestrator | Saturday 14 February 2026 05:54:21 +0000 (0:00:00.884) 0:17:33.884 ***** 2026-02-14 05:54:36.268307 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:54:36.268317 | orchestrator | 2026-02-14 05:54:36.268328 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-14 05:54:36.268339 | orchestrator | Saturday 14 February 2026 05:54:22 +0000 (0:00:00.796) 0:17:34.681 ***** 2026-02-14 05:54:36.268350 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:54:36.268360 | orchestrator | 2026-02-14 05:54:36.268371 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-14 05:54:36.268381 | orchestrator | Saturday 14 February 2026 05:54:23 +0000 (0:00:00.784) 0:17:35.465 ***** 2026-02-14 05:54:36.268392 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:54:36.268403 | orchestrator | 2026-02-14 05:54:36.268413 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-14 05:54:36.268424 | orchestrator | Saturday 14 February 2026 05:54:23 +0000 (0:00:00.781) 0:17:36.247 ***** 2026-02-14 05:54:36.268435 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:54:36.268455 | orchestrator | 2026-02-14 05:54:36.268466 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-14 05:54:36.268476 | orchestrator | Saturday 14 February 2026 05:54:24 +0000 (0:00:00.817) 0:17:37.064 ***** 2026-02-14 05:54:36.268487 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:54:36.268498 | orchestrator | 2026-02-14 05:54:36.268508 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-14 05:54:36.268519 | orchestrator | Saturday 14 February 2026 05:54:25 +0000 (0:00:00.837) 0:17:37.902 ***** 2026-02-14 
05:54:36.268530 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:54:36.268541 | orchestrator | 2026-02-14 05:54:36.268551 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-14 05:54:36.268562 | orchestrator | Saturday 14 February 2026 05:54:26 +0000 (0:00:00.780) 0:17:38.683 ***** 2026-02-14 05:54:36.268573 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:54:36.268591 | orchestrator | 2026-02-14 05:54:36.268610 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-14 05:54:36.268629 | orchestrator | Saturday 14 February 2026 05:54:27 +0000 (0:00:00.935) 0:17:39.619 ***** 2026-02-14 05:54:36.268649 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:54:36.268669 | orchestrator | 2026-02-14 05:54:36.268681 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-14 05:54:36.268692 | orchestrator | Saturday 14 February 2026 05:54:28 +0000 (0:00:00.779) 0:17:40.398 ***** 2026-02-14 05:54:36.268702 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:54:36.268713 | orchestrator | 2026-02-14 05:54:36.268724 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-14 05:54:36.268734 | orchestrator | Saturday 14 February 2026 05:54:28 +0000 (0:00:00.921) 0:17:41.319 ***** 2026-02-14 05:54:36.268745 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:54:36.268755 | orchestrator | 2026-02-14 05:54:36.268766 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-14 05:54:36.268776 | orchestrator | Saturday 14 February 2026 05:54:29 +0000 (0:00:00.811) 0:17:42.130 ***** 2026-02-14 05:54:36.268787 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:54:36.268797 | orchestrator | 2026-02-14 05:54:36.268808 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, 
radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-14 05:54:36.268820 | orchestrator | Saturday 14 February 2026 05:54:30 +0000 (0:00:00.807) 0:17:42.938 ***** 2026-02-14 05:54:36.268830 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:54:36.268841 | orchestrator | 2026-02-14 05:54:36.268852 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-14 05:54:36.268862 | orchestrator | Saturday 14 February 2026 05:54:31 +0000 (0:00:00.795) 0:17:43.734 ***** 2026-02-14 05:54:36.268873 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:54:36.268884 | orchestrator | 2026-02-14 05:54:36.268894 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-14 05:54:36.268911 | orchestrator | Saturday 14 February 2026 05:54:32 +0000 (0:00:00.924) 0:17:44.658 ***** 2026-02-14 05:54:36.268922 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:54:36.268933 | orchestrator | 2026-02-14 05:54:36.268943 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-14 05:54:36.268954 | orchestrator | Saturday 14 February 2026 05:54:33 +0000 (0:00:00.813) 0:17:45.471 ***** 2026-02-14 05:54:36.268964 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:54:36.268975 | orchestrator | 2026-02-14 05:54:36.269007 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-14 05:54:36.269020 | orchestrator | Saturday 14 February 2026 05:54:33 +0000 (0:00:00.840) 0:17:46.311 ***** 2026-02-14 05:54:36.269031 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-14 05:54:36.269041 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-14 05:54:36.269052 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-14 05:54:36.269076 | orchestrator | skipping: [testbed-node-2] 
2026-02-14 05:54:36.269086 | orchestrator | 2026-02-14 05:54:36.269097 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-14 05:54:36.269108 | orchestrator | Saturday 14 February 2026 05:54:35 +0000 (0:00:01.182) 0:17:47.493 ***** 2026-02-14 05:54:36.269119 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-14 05:54:36.269137 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-14 05:56:04.735812 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-14 05:56:04.735955 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:56:04.736057 | orchestrator | 2026-02-14 05:56:04.736077 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-14 05:56:04.736097 | orchestrator | Saturday 14 February 2026 05:54:36 +0000 (0:00:01.087) 0:17:48.581 ***** 2026-02-14 05:56:04.736113 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-14 05:56:04.736129 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-14 05:56:04.736147 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-14 05:56:04.736164 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:56:04.736179 | orchestrator | 2026-02-14 05:56:04.736195 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-14 05:56:04.736211 | orchestrator | Saturday 14 February 2026 05:54:37 +0000 (0:00:01.105) 0:17:49.686 ***** 2026-02-14 05:56:04.736227 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:56:04.736244 | orchestrator | 2026-02-14 05:56:04.736260 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-14 05:56:04.736276 | orchestrator | Saturday 14 February 2026 05:54:38 +0000 (0:00:00.832) 0:17:50.518 ***** 2026-02-14 05:56:04.736293 | orchestrator | skipping: 
[testbed-node-2] => (item=0)  2026-02-14 05:56:04.736310 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:56:04.736327 | orchestrator | 2026-02-14 05:56:04.736372 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-14 05:56:04.736391 | orchestrator | Saturday 14 February 2026 05:54:39 +0000 (0:00:00.928) 0:17:51.447 ***** 2026-02-14 05:56:04.736409 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:56:04.736427 | orchestrator | 2026-02-14 05:56:04.736443 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-02-14 05:56:04.736460 | orchestrator | Saturday 14 February 2026 05:54:40 +0000 (0:00:01.486) 0:17:52.934 ***** 2026-02-14 05:56:04.736477 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:56:04.736494 | orchestrator | 2026-02-14 05:56:04.736512 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-02-14 05:56:04.736530 | orchestrator | Saturday 14 February 2026 05:54:41 +0000 (0:00:00.798) 0:17:53.732 ***** 2026-02-14 05:56:04.736547 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-2 2026-02-14 05:56:04.736565 | orchestrator | 2026-02-14 05:56:04.736582 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-02-14 05:56:04.736598 | orchestrator | Saturday 14 February 2026 05:54:42 +0000 (0:00:01.172) 0:17:54.904 ***** 2026-02-14 05:56:04.736616 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:56:04.736634 | orchestrator | 2026-02-14 05:56:04.736652 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-02-14 05:56:04.736668 | orchestrator | Saturday 14 February 2026 05:54:46 +0000 (0:00:03.921) 0:17:58.826 ***** 2026-02-14 05:56:04.736685 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:56:04.736702 | orchestrator | 2026-02-14 05:56:04.736718 | 
orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-02-14 05:56:04.736735 | orchestrator | Saturday 14 February 2026 05:54:47 +0000 (0:00:01.169) 0:17:59.996 ***** 2026-02-14 05:56:04.736752 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:56:04.736769 | orchestrator | 2026-02-14 05:56:04.736786 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-02-14 05:56:04.736803 | orchestrator | Saturday 14 February 2026 05:54:48 +0000 (0:00:01.201) 0:18:01.198 ***** 2026-02-14 05:56:04.736858 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:56:04.736877 | orchestrator | 2026-02-14 05:56:04.736893 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-02-14 05:56:04.736910 | orchestrator | Saturday 14 February 2026 05:54:50 +0000 (0:00:01.155) 0:18:02.353 ***** 2026-02-14 05:56:04.736926 | orchestrator | changed: [testbed-node-2] 2026-02-14 05:56:04.736942 | orchestrator | 2026-02-14 05:56:04.736958 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-02-14 05:56:04.737003 | orchestrator | Saturday 14 February 2026 05:54:52 +0000 (0:00:02.034) 0:18:04.387 ***** 2026-02-14 05:56:04.737020 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:56:04.737036 | orchestrator | 2026-02-14 05:56:04.737052 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-02-14 05:56:04.737068 | orchestrator | Saturday 14 February 2026 05:54:53 +0000 (0:00:01.670) 0:18:06.058 ***** 2026-02-14 05:56:04.737086 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:56:04.737101 | orchestrator | 2026-02-14 05:56:04.737117 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-14 05:56:04.737134 | orchestrator | Saturday 14 February 2026 05:54:55 +0000 (0:00:01.616) 0:18:07.674 ***** 2026-02-14 05:56:04.737171 
| orchestrator | ok: [testbed-node-2] 2026-02-14 05:56:04.737190 | orchestrator | 2026-02-14 05:56:04.737207 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-14 05:56:04.737224 | orchestrator | Saturday 14 February 2026 05:54:56 +0000 (0:00:01.600) 0:18:09.274 ***** 2026-02-14 05:56:04.737240 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-14 05:56:04.737256 | orchestrator | 2026-02-14 05:56:04.737274 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-02-14 05:56:04.737291 | orchestrator | Saturday 14 February 2026 05:54:58 +0000 (0:00:01.675) 0:18:10.950 ***** 2026-02-14 05:56:04.737306 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-14 05:56:04.737320 | orchestrator | 2026-02-14 05:56:04.737334 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-14 05:56:04.737348 | orchestrator | Saturday 14 February 2026 05:55:00 +0000 (0:00:01.632) 0:18:12.583 ***** 2026-02-14 05:56:04.737362 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-14 05:56:04.737375 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-14 05:56:04.737388 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-02-14 05:56:04.737401 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-02-14 05:56:04.737415 | orchestrator | 2026-02-14 05:56:04.737451 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-14 05:56:04.737467 | orchestrator | Saturday 14 February 2026 05:55:04 +0000 (0:00:03.916) 0:18:16.499 ***** 2026-02-14 05:56:04.737481 | orchestrator | changed: [testbed-node-2] 2026-02-14 05:56:04.737495 | orchestrator | 2026-02-14 05:56:04.737508 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] 
************************** 2026-02-14 05:56:04.737521 | orchestrator | Saturday 14 February 2026 05:55:06 +0000 (0:00:02.147) 0:18:18.647 ***** 2026-02-14 05:56:04.737534 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:56:04.737548 | orchestrator | 2026-02-14 05:56:04.737562 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-14 05:56:04.737574 | orchestrator | Saturday 14 February 2026 05:55:07 +0000 (0:00:01.153) 0:18:19.801 ***** 2026-02-14 05:56:04.737588 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:56:04.737601 | orchestrator | 2026-02-14 05:56:04.737613 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-14 05:56:04.737626 | orchestrator | Saturday 14 February 2026 05:55:08 +0000 (0:00:01.294) 0:18:21.095 ***** 2026-02-14 05:56:04.737639 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:56:04.737654 | orchestrator | 2026-02-14 05:56:04.737668 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-14 05:56:04.737681 | orchestrator | Saturday 14 February 2026 05:55:10 +0000 (0:00:01.840) 0:18:22.936 ***** 2026-02-14 05:56:04.737709 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:56:04.737721 | orchestrator | 2026-02-14 05:56:04.737734 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-14 05:56:04.737746 | orchestrator | Saturday 14 February 2026 05:55:12 +0000 (0:00:01.479) 0:18:24.415 ***** 2026-02-14 05:56:04.737760 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:56:04.737773 | orchestrator | 2026-02-14 05:56:04.737787 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-02-14 05:56:04.737801 | orchestrator | Saturday 14 February 2026 05:55:12 +0000 (0:00:00.776) 0:18:25.192 ***** 2026-02-14 05:56:04.737815 | orchestrator | included: 
/ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-2 2026-02-14 05:56:04.737829 | orchestrator | 2026-02-14 05:56:04.737841 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-14 05:56:04.737855 | orchestrator | Saturday 14 February 2026 05:55:13 +0000 (0:00:01.124) 0:18:26.316 ***** 2026-02-14 05:56:04.737868 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:56:04.737882 | orchestrator | 2026-02-14 05:56:04.737895 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-14 05:56:04.737909 | orchestrator | Saturday 14 February 2026 05:55:15 +0000 (0:00:01.148) 0:18:27.464 ***** 2026-02-14 05:56:04.737923 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:56:04.737932 | orchestrator | 2026-02-14 05:56:04.737940 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-14 05:56:04.737948 | orchestrator | Saturday 14 February 2026 05:55:16 +0000 (0:00:01.165) 0:18:28.630 ***** 2026-02-14 05:56:04.737955 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-2 2026-02-14 05:56:04.737963 | orchestrator | 2026-02-14 05:56:04.737995 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-02-14 05:56:04.738003 | orchestrator | Saturday 14 February 2026 05:55:17 +0000 (0:00:01.108) 0:18:29.738 ***** 2026-02-14 05:56:04.738011 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:56:04.738068 | orchestrator | 2026-02-14 05:56:04.738076 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-14 05:56:04.738084 | orchestrator | Saturday 14 February 2026 05:55:20 +0000 (0:00:02.776) 0:18:32.514 ***** 2026-02-14 05:56:04.738092 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:56:04.738100 | orchestrator | 2026-02-14 05:56:04.738107 | orchestrator | TASK [ceph-mon : Enable 
ceph-mon.target] *************************************** 2026-02-14 05:56:04.738115 | orchestrator | Saturday 14 February 2026 05:55:22 +0000 (0:00:02.061) 0:18:34.576 ***** 2026-02-14 05:56:04.738123 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:56:04.738161 | orchestrator | 2026-02-14 05:56:04.738170 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-14 05:56:04.738178 | orchestrator | Saturday 14 February 2026 05:55:24 +0000 (0:00:02.449) 0:18:37.025 ***** 2026-02-14 05:56:04.738186 | orchestrator | changed: [testbed-node-2] 2026-02-14 05:56:04.738194 | orchestrator | 2026-02-14 05:56:04.738202 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-14 05:56:04.738209 | orchestrator | Saturday 14 February 2026 05:55:27 +0000 (0:00:02.973) 0:18:39.998 ***** 2026-02-14 05:56:04.738217 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-2 2026-02-14 05:56:04.738225 | orchestrator | 2026-02-14 05:56:04.738233 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-02-14 05:56:04.738249 | orchestrator | Saturday 14 February 2026 05:55:28 +0000 (0:00:01.203) 0:18:41.202 ***** 2026-02-14 05:56:04.738257 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-02-14 05:56:04.738265 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:56:04.738273 | orchestrator | 2026-02-14 05:56:04.738281 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-14 05:56:04.738289 | orchestrator | Saturday 14 February 2026 05:55:51 +0000 (0:00:22.958) 0:19:04.161 ***** 2026-02-14 05:56:04.738307 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:56:04.738315 | orchestrator | 2026-02-14 05:56:04.738324 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-14 05:56:04.738332 | orchestrator | Saturday 14 February 2026 05:55:54 +0000 (0:00:02.656) 0:19:06.817 ***** 2026-02-14 05:56:04.738339 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:56:04.738347 | orchestrator | 2026-02-14 05:56:04.738355 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-14 05:56:04.738363 | orchestrator | Saturday 14 February 2026 05:55:55 +0000 (0:00:00.823) 0:19:07.641 ***** 2026-02-14 05:56:04.738386 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__00bc4e30ddc9572289d35718eb4b72edbacd0b8b'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-14 05:56:42.248115 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__00bc4e30ddc9572289d35718eb4b72edbacd0b8b'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-02-14 05:56:42.248271 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__00bc4e30ddc9572289d35718eb4b72edbacd0b8b'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-14 05:56:42.248299 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__00bc4e30ddc9572289d35718eb4b72edbacd0b8b'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-14 05:56:42.248322 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__00bc4e30ddc9572289d35718eb4b72edbacd0b8b'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-14 05:56:42.248344 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__00bc4e30ddc9572289d35718eb4b72edbacd0b8b'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__00bc4e30ddc9572289d35718eb4b72edbacd0b8b'}])  2026-02-14 05:56:42.248367 | orchestrator | 2026-02-14 05:56:42.248390 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-02-14 05:56:42.248412 | orchestrator | Saturday 14 February 2026 05:56:04 +0000 (0:00:09.408) 0:19:17.050 ***** 2026-02-14 05:56:42.248432 | orchestrator | changed: [testbed-node-2] 2026-02-14 05:56:42.248452 | orchestrator | 
2026-02-14 05:56:42.248473 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-14 05:56:42.248494 | orchestrator | Saturday 14 February 2026 05:56:06 +0000 (0:00:02.196) 0:19:19.247 ***** 2026-02-14 05:56:42.248514 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 05:56:42.248535 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1) 2026-02-14 05:56:42.248557 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2) 2026-02-14 05:56:42.248617 | orchestrator | 2026-02-14 05:56:42.248640 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-14 05:56:42.248663 | orchestrator | Saturday 14 February 2026 05:56:08 +0000 (0:00:02.005) 0:19:21.252 ***** 2026-02-14 05:56:42.248682 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-14 05:56:42.248704 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-14 05:56:42.248723 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-14 05:56:42.248763 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:56:42.248783 | orchestrator | 2026-02-14 05:56:42.248802 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-02-14 05:56:42.248824 | orchestrator | Saturday 14 February 2026 05:56:10 +0000 (0:00:01.099) 0:19:22.352 ***** 2026-02-14 05:56:42.248846 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:56:42.248866 | orchestrator | 2026-02-14 05:56:42.248886 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-02-14 05:56:42.248906 | orchestrator | Saturday 14 February 2026 05:56:10 +0000 (0:00:00.787) 0:19:23.139 ***** 2026-02-14 05:56:42.248924 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:56:42.248943 | orchestrator | 2026-02-14 05:56:42.248993 | orchestrator | PLAY [Reset mon_host] ********************************************************** 2026-02-14 05:56:42.249015 | orchestrator | 2026-02-14 05:56:42.249033 | orchestrator | TASK [Reset mon_host fact] ***************************************************** 2026-02-14 05:56:42.249052 | orchestrator | Saturday 14 February 2026 05:56:14 +0000 (0:00:03.439) 0:19:26.579 ***** 2026-02-14 05:56:42.249071 | orchestrator | ok: [testbed-node-0] 2026-02-14 05:56:42.249090 | orchestrator | ok: [testbed-node-1] 2026-02-14 05:56:42.249110 | orchestrator | ok: [testbed-node-2] 2026-02-14 05:56:42.249127 | orchestrator | 2026-02-14 05:56:42.249144 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-02-14 05:56:42.249163 | orchestrator | 2026-02-14 05:56:42.249182 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-02-14 05:56:42.249201 | orchestrator | Saturday 14 February 2026 05:56:15 +0000 (0:00:01.683) 0:19:28.262 ***** 2026-02-14 05:56:42.249220 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:56:42.249239 | orchestrator | 2026-02-14 05:56:42.249256 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-14 05:56:42.249293 | orchestrator | Saturday 14 February 2026 05:56:17 +0000 (0:00:01.174) 0:19:29.437 ***** 2026-02-14 05:56:42.249304 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:56:42.249315 | orchestrator | 2026-02-14 05:56:42.249326 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-14 05:56:42.249337 | orchestrator | Saturday 14 February 2026 05:56:18 +0000 (0:00:01.213) 0:19:30.651 
***** 2026-02-14 05:56:42.249348 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:56:42.249359 | orchestrator | 2026-02-14 05:56:42.249369 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-14 05:56:42.249380 | orchestrator | Saturday 14 February 2026 05:56:19 +0000 (0:00:01.178) 0:19:31.830 ***** 2026-02-14 05:56:42.249391 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:56:42.249402 | orchestrator | 2026-02-14 05:56:42.249412 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-14 05:56:42.249423 | orchestrator | Saturday 14 February 2026 05:56:20 +0000 (0:00:01.169) 0:19:33.000 ***** 2026-02-14 05:56:42.249434 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:56:42.249445 | orchestrator | 2026-02-14 05:56:42.249456 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-14 05:56:42.249466 | orchestrator | Saturday 14 February 2026 05:56:21 +0000 (0:00:01.291) 0:19:34.292 ***** 2026-02-14 05:56:42.249477 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:56:42.249488 | orchestrator | 2026-02-14 05:56:42.249499 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-14 05:56:42.249510 | orchestrator | Saturday 14 February 2026 05:56:23 +0000 (0:00:01.164) 0:19:35.456 ***** 2026-02-14 05:56:42.249520 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:56:42.249546 | orchestrator | 2026-02-14 05:56:42.249557 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-14 05:56:42.249568 | orchestrator | Saturday 14 February 2026 05:56:24 +0000 (0:00:01.208) 0:19:36.665 ***** 2026-02-14 05:56:42.249579 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:56:42.249590 | orchestrator | 2026-02-14 05:56:42.249600 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] 
****************************** 2026-02-14 05:56:42.249611 | orchestrator | Saturday 14 February 2026 05:56:25 +0000 (0:00:01.172) 0:19:37.837 ***** 2026-02-14 05:56:42.249622 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:56:42.249633 | orchestrator | 2026-02-14 05:56:42.249645 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-14 05:56:42.249656 | orchestrator | Saturday 14 February 2026 05:56:26 +0000 (0:00:01.129) 0:19:38.967 ***** 2026-02-14 05:56:42.249666 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:56:42.249677 | orchestrator | 2026-02-14 05:56:42.249688 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-14 05:56:42.249699 | orchestrator | Saturday 14 February 2026 05:56:27 +0000 (0:00:01.130) 0:19:40.098 ***** 2026-02-14 05:56:42.249709 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:56:42.249720 | orchestrator | 2026-02-14 05:56:42.249731 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-14 05:56:42.249742 | orchestrator | Saturday 14 February 2026 05:56:28 +0000 (0:00:01.145) 0:19:41.243 ***** 2026-02-14 05:56:42.249753 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:56:42.249763 | orchestrator | 2026-02-14 05:56:42.249774 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-14 05:56:42.249785 | orchestrator | Saturday 14 February 2026 05:56:30 +0000 (0:00:01.141) 0:19:42.385 ***** 2026-02-14 05:56:42.249796 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:56:42.249807 | orchestrator | 2026-02-14 05:56:42.249817 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-14 05:56:42.249828 | orchestrator | Saturday 14 February 2026 05:56:31 +0000 (0:00:01.214) 0:19:43.599 ***** 2026-02-14 05:56:42.249839 | orchestrator | skipping: 
[testbed-node-0] 2026-02-14 05:56:42.249849 | orchestrator | 2026-02-14 05:56:42.249860 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-14 05:56:42.249871 | orchestrator | Saturday 14 February 2026 05:56:32 +0000 (0:00:01.145) 0:19:44.745 ***** 2026-02-14 05:56:42.249881 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:56:42.249892 | orchestrator | 2026-02-14 05:56:42.249903 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-14 05:56:42.249914 | orchestrator | Saturday 14 February 2026 05:56:33 +0000 (0:00:01.210) 0:19:45.955 ***** 2026-02-14 05:56:42.249924 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:56:42.249935 | orchestrator | 2026-02-14 05:56:42.249955 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-14 05:56:42.250100 | orchestrator | Saturday 14 February 2026 05:56:34 +0000 (0:00:01.277) 0:19:47.233 ***** 2026-02-14 05:56:42.250114 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:56:42.250125 | orchestrator | 2026-02-14 05:56:42.250136 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-14 05:56:42.250147 | orchestrator | Saturday 14 February 2026 05:56:36 +0000 (0:00:01.335) 0:19:48.569 ***** 2026-02-14 05:56:42.250158 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:56:42.250169 | orchestrator | 2026-02-14 05:56:42.250180 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-14 05:56:42.250191 | orchestrator | Saturday 14 February 2026 05:56:37 +0000 (0:00:01.214) 0:19:49.784 ***** 2026-02-14 05:56:42.250202 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:56:42.250213 | orchestrator | 2026-02-14 05:56:42.250224 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-14 
05:56:42.250235 | orchestrator | Saturday 14 February 2026 05:56:38 +0000 (0:00:01.241) 0:19:51.025 ***** 2026-02-14 05:56:42.250254 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:56:42.250265 | orchestrator | 2026-02-14 05:56:42.250276 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-14 05:56:42.250287 | orchestrator | Saturday 14 February 2026 05:56:39 +0000 (0:00:01.152) 0:19:52.177 ***** 2026-02-14 05:56:42.250298 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:56:42.250309 | orchestrator | 2026-02-14 05:56:42.250320 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-14 05:56:42.250331 | orchestrator | Saturday 14 February 2026 05:56:41 +0000 (0:00:01.198) 0:19:53.376 ***** 2026-02-14 05:56:42.250352 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:57:27.714344 | orchestrator | 2026-02-14 05:57:27.714461 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-14 05:57:27.714477 | orchestrator | Saturday 14 February 2026 05:56:42 +0000 (0:00:01.186) 0:19:54.563 ***** 2026-02-14 05:57:27.714489 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:57:27.714501 | orchestrator | 2026-02-14 05:57:27.714513 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-14 05:57:27.714524 | orchestrator | Saturday 14 February 2026 05:56:43 +0000 (0:00:01.162) 0:19:55.726 ***** 2026-02-14 05:57:27.714535 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:57:27.714546 | orchestrator | 2026-02-14 05:57:27.714556 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-14 05:57:27.714567 | orchestrator | Saturday 14 February 2026 05:56:44 +0000 (0:00:01.161) 0:19:56.888 ***** 2026-02-14 05:57:27.714578 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:57:27.714589 | 
orchestrator | 2026-02-14 05:57:27.714600 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-14 05:57:27.714611 | orchestrator | Saturday 14 February 2026 05:56:45 +0000 (0:00:01.192) 0:19:58.081 ***** 2026-02-14 05:57:27.714622 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:57:27.714633 | orchestrator | 2026-02-14 05:57:27.714643 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-14 05:57:27.714654 | orchestrator | Saturday 14 February 2026 05:56:46 +0000 (0:00:01.148) 0:19:59.229 ***** 2026-02-14 05:57:27.714665 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:57:27.714676 | orchestrator | 2026-02-14 05:57:27.714686 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-14 05:57:27.714697 | orchestrator | Saturday 14 February 2026 05:56:48 +0000 (0:00:01.182) 0:20:00.412 ***** 2026-02-14 05:57:27.714708 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:57:27.714719 | orchestrator | 2026-02-14 05:57:27.714729 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-14 05:57:27.714740 | orchestrator | Saturday 14 February 2026 05:56:49 +0000 (0:00:01.119) 0:20:01.531 ***** 2026-02-14 05:57:27.714751 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:57:27.714762 | orchestrator | 2026-02-14 05:57:27.714773 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-14 05:57:27.714783 | orchestrator | Saturday 14 February 2026 05:56:50 +0000 (0:00:01.237) 0:20:02.769 ***** 2026-02-14 05:57:27.714794 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:57:27.714805 | orchestrator | 2026-02-14 05:57:27.714816 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-14 05:57:27.714828 | orchestrator | Saturday 14 February 2026 
05:56:51 +0000 (0:00:01.299) 0:20:04.069 ***** 2026-02-14 05:57:27.714839 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:57:27.714849 | orchestrator | 2026-02-14 05:57:27.714860 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-14 05:57:27.714871 | orchestrator | Saturday 14 February 2026 05:56:52 +0000 (0:00:01.223) 0:20:05.292 ***** 2026-02-14 05:57:27.714884 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:57:27.714897 | orchestrator | 2026-02-14 05:57:27.714910 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-14 05:57:27.714922 | orchestrator | Saturday 14 February 2026 05:56:54 +0000 (0:00:01.226) 0:20:06.519 ***** 2026-02-14 05:57:27.714988 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:57:27.715002 | orchestrator | 2026-02-14 05:57:27.715015 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-14 05:57:27.715027 | orchestrator | Saturday 14 February 2026 05:56:55 +0000 (0:00:01.181) 0:20:07.700 ***** 2026-02-14 05:57:27.715039 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:57:27.715051 | orchestrator | 2026-02-14 05:57:27.715063 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-14 05:57:27.715075 | orchestrator | Saturday 14 February 2026 05:56:56 +0000 (0:00:01.127) 0:20:08.828 ***** 2026-02-14 05:57:27.715087 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:57:27.715100 | orchestrator | 2026-02-14 05:57:27.715111 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-14 05:57:27.715124 | orchestrator | Saturday 14 February 2026 05:56:57 +0000 (0:00:01.170) 0:20:09.998 ***** 2026-02-14 05:57:27.715136 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:57:27.715148 | orchestrator | 2026-02-14 05:57:27.715160 | orchestrator | TASK 
[ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-14 05:57:27.715187 | orchestrator | Saturday 14 February 2026 05:56:58 +0000 (0:00:01.139) 0:20:11.138 ***** 2026-02-14 05:57:27.715199 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:57:27.715212 | orchestrator | 2026-02-14 05:57:27.715224 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-14 05:57:27.715236 | orchestrator | Saturday 14 February 2026 05:56:59 +0000 (0:00:01.160) 0:20:12.298 ***** 2026-02-14 05:57:27.715246 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:57:27.715257 | orchestrator | 2026-02-14 05:57:27.715269 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-14 05:57:27.715279 | orchestrator | Saturday 14 February 2026 05:57:01 +0000 (0:00:01.147) 0:20:13.446 ***** 2026-02-14 05:57:27.715290 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:57:27.715301 | orchestrator | 2026-02-14 05:57:27.715312 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-14 05:57:27.715323 | orchestrator | Saturday 14 February 2026 05:57:02 +0000 (0:00:01.179) 0:20:14.625 ***** 2026-02-14 05:57:27.715334 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:57:27.715345 | orchestrator | 2026-02-14 05:57:27.715356 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-14 05:57:27.715366 | orchestrator | Saturday 14 February 2026 05:57:03 +0000 (0:00:01.143) 0:20:15.769 ***** 2026-02-14 05:57:27.715377 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:57:27.715388 | orchestrator | 2026-02-14 05:57:27.715398 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-14 05:57:27.715409 | orchestrator | Saturday 14 
February 2026 05:57:04 +0000 (0:00:01.224) 0:20:16.993 ***** 2026-02-14 05:57:27.715437 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:57:27.715449 | orchestrator | 2026-02-14 05:57:27.715460 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-14 05:57:27.715471 | orchestrator | Saturday 14 February 2026 05:57:05 +0000 (0:00:01.312) 0:20:18.306 ***** 2026-02-14 05:57:27.715482 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:57:27.715492 | orchestrator | 2026-02-14 05:57:27.715503 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-14 05:57:27.715514 | orchestrator | Saturday 14 February 2026 05:57:07 +0000 (0:00:01.215) 0:20:19.521 ***** 2026-02-14 05:57:27.715525 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:57:27.715535 | orchestrator | 2026-02-14 05:57:27.715546 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-14 05:57:27.715557 | orchestrator | Saturday 14 February 2026 05:57:08 +0000 (0:00:01.143) 0:20:20.665 ***** 2026-02-14 05:57:27.715567 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:57:27.715578 | orchestrator | 2026-02-14 05:57:27.715589 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-14 05:57:27.715608 | orchestrator | Saturday 14 February 2026 05:57:09 +0000 (0:00:01.145) 0:20:21.811 ***** 2026-02-14 05:57:27.715619 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:57:27.715629 | orchestrator | 2026-02-14 05:57:27.715640 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-14 05:57:27.715651 | orchestrator | Saturday 14 February 2026 05:57:10 +0000 (0:00:01.274) 0:20:23.085 ***** 2026-02-14 05:57:27.715662 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:57:27.715673 | orchestrator | 2026-02-14 
05:57:27.715683 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-14 05:57:27.715694 | orchestrator | Saturday 14 February 2026 05:57:12 +0000 (0:00:01.243) 0:20:24.329 ***** 2026-02-14 05:57:27.715705 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:57:27.715716 | orchestrator | 2026-02-14 05:57:27.715726 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-14 05:57:27.715737 | orchestrator | Saturday 14 February 2026 05:57:13 +0000 (0:00:01.273) 0:20:25.602 ***** 2026-02-14 05:57:27.715748 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:57:27.715759 | orchestrator | 2026-02-14 05:57:27.715769 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-14 05:57:27.715780 | orchestrator | Saturday 14 February 2026 05:57:14 +0000 (0:00:01.133) 0:20:26.735 ***** 2026-02-14 05:57:27.715791 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:57:27.715802 | orchestrator | 2026-02-14 05:57:27.715812 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-14 05:57:27.715825 | orchestrator | Saturday 14 February 2026 05:57:15 +0000 (0:00:01.137) 0:20:27.873 ***** 2026-02-14 05:57:27.715835 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:57:27.715846 | orchestrator | 2026-02-14 05:57:27.715857 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-14 05:57:27.715868 | orchestrator | Saturday 14 February 2026 05:57:16 +0000 (0:00:01.179) 0:20:29.052 ***** 2026-02-14 05:57:27.715879 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:57:27.715890 | orchestrator | 2026-02-14 05:57:27.715901 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-14 05:57:27.715912 | orchestrator | 
Saturday 14 February 2026 05:57:17 +0000 (0:00:01.181) 0:20:30.234 ***** 2026-02-14 05:57:27.715922 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:57:27.715933 | orchestrator | 2026-02-14 05:57:27.715944 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-14 05:57:27.715993 | orchestrator | Saturday 14 February 2026 05:57:19 +0000 (0:00:01.184) 0:20:31.418 ***** 2026-02-14 05:57:27.716006 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:57:27.716017 | orchestrator | 2026-02-14 05:57:27.716027 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-14 05:57:27.716038 | orchestrator | Saturday 14 February 2026 05:57:20 +0000 (0:00:01.162) 0:20:32.581 ***** 2026-02-14 05:57:27.716049 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-14 05:57:27.716060 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-14 05:57:27.716071 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-14 05:57:27.716081 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:57:27.716092 | orchestrator | 2026-02-14 05:57:27.716108 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-14 05:57:27.716119 | orchestrator | Saturday 14 February 2026 05:57:22 +0000 (0:00:02.043) 0:20:34.624 ***** 2026-02-14 05:57:27.716130 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-14 05:57:27.716141 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-14 05:57:27.716151 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-14 05:57:27.716162 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:57:27.716173 | orchestrator | 2026-02-14 05:57:27.716183 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-14 05:57:27.716201 | 
orchestrator | Saturday 14 February 2026 05:57:23 +0000 (0:00:01.434) 0:20:36.059 ***** 2026-02-14 05:57:27.716212 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-14 05:57:27.716223 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-14 05:57:27.716233 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-14 05:57:27.716244 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:57:27.716255 | orchestrator | 2026-02-14 05:57:27.716265 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-14 05:57:27.716276 | orchestrator | Saturday 14 February 2026 05:57:25 +0000 (0:00:01.461) 0:20:37.520 ***** 2026-02-14 05:57:27.716287 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:57:27.716298 | orchestrator | 2026-02-14 05:57:27.716308 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-14 05:57:27.716319 | orchestrator | Saturday 14 February 2026 05:57:26 +0000 (0:00:01.198) 0:20:38.719 ***** 2026-02-14 05:57:27.716331 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-02-14 05:57:27.716349 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:58:02.527505 | orchestrator | 2026-02-14 05:58:02.527626 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-14 05:58:02.527644 | orchestrator | Saturday 14 February 2026 05:57:27 +0000 (0:00:01.309) 0:20:40.029 ***** 2026-02-14 05:58:02.527657 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:58:02.527669 | orchestrator | 2026-02-14 05:58:02.527680 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-14 05:58:02.527691 | orchestrator | Saturday 14 February 2026 05:57:28 +0000 (0:00:01.161) 0:20:41.191 ***** 2026-02-14 05:58:02.527702 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-14 
05:58:02.527714 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-14 05:58:02.527725 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-14 05:58:02.527735 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:58:02.527746 | orchestrator | 2026-02-14 05:58:02.527757 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-14 05:58:02.527768 | orchestrator | Saturday 14 February 2026 05:57:30 +0000 (0:00:01.487) 0:20:42.679 ***** 2026-02-14 05:58:02.527779 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:58:02.527789 | orchestrator | 2026-02-14 05:58:02.527800 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-14 05:58:02.527811 | orchestrator | Saturday 14 February 2026 05:57:31 +0000 (0:00:01.152) 0:20:43.832 ***** 2026-02-14 05:58:02.527822 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:58:02.527832 | orchestrator | 2026-02-14 05:58:02.527843 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-14 05:58:02.527854 | orchestrator | Saturday 14 February 2026 05:57:32 +0000 (0:00:01.132) 0:20:44.964 ***** 2026-02-14 05:58:02.527865 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:58:02.527876 | orchestrator | 2026-02-14 05:58:02.527887 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-02-14 05:58:02.527897 | orchestrator | Saturday 14 February 2026 05:57:33 +0000 (0:00:01.168) 0:20:46.133 ***** 2026-02-14 05:58:02.527908 | orchestrator | skipping: [testbed-node-0] 2026-02-14 05:58:02.527919 | orchestrator | 2026-02-14 05:58:02.527930 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-02-14 05:58:02.527941 | orchestrator | 2026-02-14 05:58:02.527984 | orchestrator | TASK [Stop ceph mgr] 
*********************************************************** 2026-02-14 05:58:02.527996 | orchestrator | Saturday 14 February 2026 05:57:34 +0000 (0:00:01.154) 0:20:47.287 ***** 2026-02-14 05:58:02.528007 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:02.528018 | orchestrator | 2026-02-14 05:58:02.528029 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-14 05:58:02.528043 | orchestrator | Saturday 14 February 2026 05:57:35 +0000 (0:00:01.003) 0:20:48.291 ***** 2026-02-14 05:58:02.528078 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:02.528091 | orchestrator | 2026-02-14 05:58:02.528103 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-14 05:58:02.528116 | orchestrator | Saturday 14 February 2026 05:57:36 +0000 (0:00:00.868) 0:20:49.160 ***** 2026-02-14 05:58:02.528128 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:02.528142 | orchestrator | 2026-02-14 05:58:02.528154 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-14 05:58:02.528167 | orchestrator | Saturday 14 February 2026 05:57:37 +0000 (0:00:00.831) 0:20:49.991 ***** 2026-02-14 05:58:02.528179 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:02.528192 | orchestrator | 2026-02-14 05:58:02.528210 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-14 05:58:02.528228 | orchestrator | Saturday 14 February 2026 05:57:38 +0000 (0:00:00.823) 0:20:50.814 ***** 2026-02-14 05:58:02.528248 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:02.528265 | orchestrator | 2026-02-14 05:58:02.528283 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-14 05:58:02.528301 | orchestrator | Saturday 14 February 2026 05:57:39 +0000 (0:00:00.810) 0:20:51.625 ***** 2026-02-14 05:58:02.528319 | 
orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:02.528337 | orchestrator | 2026-02-14 05:58:02.528353 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-14 05:58:02.528371 | orchestrator | Saturday 14 February 2026 05:57:40 +0000 (0:00:00.824) 0:20:52.450 ***** 2026-02-14 05:58:02.528390 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:02.528409 | orchestrator | 2026-02-14 05:58:02.528447 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-14 05:58:02.528467 | orchestrator | Saturday 14 February 2026 05:57:40 +0000 (0:00:00.842) 0:20:53.292 ***** 2026-02-14 05:58:02.528485 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:02.528502 | orchestrator | 2026-02-14 05:58:02.528521 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-14 05:58:02.528541 | orchestrator | Saturday 14 February 2026 05:57:41 +0000 (0:00:00.831) 0:20:54.123 ***** 2026-02-14 05:58:02.528557 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:02.528578 | orchestrator | 2026-02-14 05:58:02.528596 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-14 05:58:02.528613 | orchestrator | Saturday 14 February 2026 05:57:42 +0000 (0:00:00.818) 0:20:54.942 ***** 2026-02-14 05:58:02.528631 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:02.528650 | orchestrator | 2026-02-14 05:58:02.528669 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-14 05:58:02.528688 | orchestrator | Saturday 14 February 2026 05:57:43 +0000 (0:00:00.790) 0:20:55.732 ***** 2026-02-14 05:58:02.528707 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:02.528725 | orchestrator | 2026-02-14 05:58:02.528745 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 
2026-02-14 05:58:02.528764 | orchestrator | Saturday 14 February 2026 05:57:44 +0000 (0:00:00.793) 0:20:56.526 ***** 2026-02-14 05:58:02.528782 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:02.528800 | orchestrator | 2026-02-14 05:58:02.528819 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-14 05:58:02.528838 | orchestrator | Saturday 14 February 2026 05:57:45 +0000 (0:00:00.984) 0:20:57.510 ***** 2026-02-14 05:58:02.528856 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:02.528876 | orchestrator | 2026-02-14 05:58:02.528921 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-14 05:58:02.528941 | orchestrator | Saturday 14 February 2026 05:57:45 +0000 (0:00:00.805) 0:20:58.316 ***** 2026-02-14 05:58:02.529005 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:02.529025 | orchestrator | 2026-02-14 05:58:02.529043 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-14 05:58:02.529061 | orchestrator | Saturday 14 February 2026 05:57:46 +0000 (0:00:00.787) 0:20:59.104 ***** 2026-02-14 05:58:02.529100 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:02.529118 | orchestrator | 2026-02-14 05:58:02.529136 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-14 05:58:02.529151 | orchestrator | Saturday 14 February 2026 05:57:47 +0000 (0:00:00.836) 0:20:59.940 ***** 2026-02-14 05:58:02.529162 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:02.529172 | orchestrator | 2026-02-14 05:58:02.529183 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-14 05:58:02.529194 | orchestrator | Saturday 14 February 2026 05:57:48 +0000 (0:00:00.774) 0:21:00.714 ***** 2026-02-14 05:58:02.529205 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:02.529216 
| orchestrator | 2026-02-14 05:58:02.529227 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-14 05:58:02.529237 | orchestrator | Saturday 14 February 2026 05:57:49 +0000 (0:00:00.787) 0:21:01.502 ***** 2026-02-14 05:58:02.529248 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:02.529259 | orchestrator | 2026-02-14 05:58:02.529269 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-14 05:58:02.529280 | orchestrator | Saturday 14 February 2026 05:57:49 +0000 (0:00:00.786) 0:21:02.289 ***** 2026-02-14 05:58:02.529291 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:02.529302 | orchestrator | 2026-02-14 05:58:02.529313 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-14 05:58:02.529324 | orchestrator | Saturday 14 February 2026 05:57:50 +0000 (0:00:00.870) 0:21:03.159 ***** 2026-02-14 05:58:02.529335 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:02.529346 | orchestrator | 2026-02-14 05:58:02.529357 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-14 05:58:02.529367 | orchestrator | Saturday 14 February 2026 05:57:51 +0000 (0:00:00.802) 0:21:03.961 ***** 2026-02-14 05:58:02.529378 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:02.529389 | orchestrator | 2026-02-14 05:58:02.529399 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-14 05:58:02.529410 | orchestrator | Saturday 14 February 2026 05:57:52 +0000 (0:00:00.808) 0:21:04.770 ***** 2026-02-14 05:58:02.529421 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:02.529431 | orchestrator | 2026-02-14 05:58:02.529442 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-14 05:58:02.529453 | orchestrator | Saturday 14 
February 2026 05:57:53 +0000 (0:00:00.860) 0:21:05.631 ***** 2026-02-14 05:58:02.529464 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:02.529474 | orchestrator | 2026-02-14 05:58:02.529485 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-14 05:58:02.529496 | orchestrator | Saturday 14 February 2026 05:57:54 +0000 (0:00:00.821) 0:21:06.452 ***** 2026-02-14 05:58:02.529507 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:02.529518 | orchestrator | 2026-02-14 05:58:02.529528 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-14 05:58:02.529539 | orchestrator | Saturday 14 February 2026 05:57:55 +0000 (0:00:01.105) 0:21:07.557 ***** 2026-02-14 05:58:02.529550 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:02.529561 | orchestrator | 2026-02-14 05:58:02.529572 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-14 05:58:02.529583 | orchestrator | Saturday 14 February 2026 05:57:56 +0000 (0:00:00.800) 0:21:08.358 ***** 2026-02-14 05:58:02.529594 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:02.529605 | orchestrator | 2026-02-14 05:58:02.529615 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-14 05:58:02.529626 | orchestrator | Saturday 14 February 2026 05:57:56 +0000 (0:00:00.768) 0:21:09.126 ***** 2026-02-14 05:58:02.529637 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:02.529648 | orchestrator | 2026-02-14 05:58:02.529665 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-14 05:58:02.529677 | orchestrator | Saturday 14 February 2026 05:57:57 +0000 (0:00:00.798) 0:21:09.924 ***** 2026-02-14 05:58:02.529695 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:02.529706 | orchestrator | 2026-02-14 05:58:02.529717 | 
orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-14 05:58:02.529728 | orchestrator | Saturday 14 February 2026 05:57:58 +0000 (0:00:00.819) 0:21:10.744 ***** 2026-02-14 05:58:02.529738 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:02.529749 | orchestrator | 2026-02-14 05:58:02.529760 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-14 05:58:02.529771 | orchestrator | Saturday 14 February 2026 05:57:59 +0000 (0:00:00.852) 0:21:11.597 ***** 2026-02-14 05:58:02.529781 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:02.529792 | orchestrator | 2026-02-14 05:58:02.529803 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-14 05:58:02.529814 | orchestrator | Saturday 14 February 2026 05:58:00 +0000 (0:00:00.778) 0:21:12.375 ***** 2026-02-14 05:58:02.529825 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:02.529835 | orchestrator | 2026-02-14 05:58:02.529846 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-14 05:58:02.529857 | orchestrator | Saturday 14 February 2026 05:58:00 +0000 (0:00:00.794) 0:21:13.170 ***** 2026-02-14 05:58:02.529867 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:02.529878 | orchestrator | 2026-02-14 05:58:02.529889 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-14 05:58:02.529900 | orchestrator | Saturday 14 February 2026 05:58:01 +0000 (0:00:00.846) 0:21:14.017 ***** 2026-02-14 05:58:02.529911 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:02.529922 | orchestrator | 2026-02-14 05:58:02.529972 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-14 05:58:33.860821 | orchestrator | Saturday 14 February 2026 05:58:02 +0000 (0:00:00.824) 0:21:14.841 ***** 
2026-02-14 05:58:33.860995 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:33.861014 | orchestrator | 2026-02-14 05:58:33.861027 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-14 05:58:33.861039 | orchestrator | Saturday 14 February 2026 05:58:03 +0000 (0:00:00.793) 0:21:15.635 ***** 2026-02-14 05:58:33.861050 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:33.861061 | orchestrator | 2026-02-14 05:58:33.861073 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-14 05:58:33.861084 | orchestrator | Saturday 14 February 2026 05:58:04 +0000 (0:00:00.939) 0:21:16.575 ***** 2026-02-14 05:58:33.861095 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:33.861106 | orchestrator | 2026-02-14 05:58:33.861117 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-14 05:58:33.861128 | orchestrator | Saturday 14 February 2026 05:58:05 +0000 (0:00:00.801) 0:21:17.376 ***** 2026-02-14 05:58:33.861139 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:33.861151 | orchestrator | 2026-02-14 05:58:33.861162 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-14 05:58:33.861173 | orchestrator | Saturday 14 February 2026 05:58:05 +0000 (0:00:00.839) 0:21:18.216 ***** 2026-02-14 05:58:33.861184 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:33.861195 | orchestrator | 2026-02-14 05:58:33.861206 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-14 05:58:33.861217 | orchestrator | Saturday 14 February 2026 05:58:06 +0000 (0:00:00.860) 0:21:19.077 ***** 2026-02-14 05:58:33.861228 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:33.861239 | orchestrator | 2026-02-14 05:58:33.861250 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch 
--report' to see how many osds are to be created] *** 2026-02-14 05:58:33.861262 | orchestrator | Saturday 14 February 2026 05:58:07 +0000 (0:00:00.807) 0:21:19.885 ***** 2026-02-14 05:58:33.861273 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:33.861284 | orchestrator | 2026-02-14 05:58:33.861295 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-14 05:58:33.861332 | orchestrator | Saturday 14 February 2026 05:58:08 +0000 (0:00:00.863) 0:21:20.748 ***** 2026-02-14 05:58:33.861346 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:33.861358 | orchestrator | 2026-02-14 05:58:33.861371 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-14 05:58:33.861384 | orchestrator | Saturday 14 February 2026 05:58:09 +0000 (0:00:00.824) 0:21:21.573 ***** 2026-02-14 05:58:33.861396 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:33.861409 | orchestrator | 2026-02-14 05:58:33.861421 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-14 05:58:33.861433 | orchestrator | Saturday 14 February 2026 05:58:10 +0000 (0:00:00.794) 0:21:22.367 ***** 2026-02-14 05:58:33.861445 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:33.861458 | orchestrator | 2026-02-14 05:58:33.861470 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-14 05:58:33.861483 | orchestrator | Saturday 14 February 2026 05:58:10 +0000 (0:00:00.779) 0:21:23.147 ***** 2026-02-14 05:58:33.861496 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:33.861508 | orchestrator | 2026-02-14 05:58:33.861521 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-14 05:58:33.861533 | orchestrator | Saturday 14 February 2026 05:58:11 +0000 
(0:00:00.779) 0:21:23.927 ***** 2026-02-14 05:58:33.861545 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:33.861558 | orchestrator | 2026-02-14 05:58:33.861571 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-14 05:58:33.861584 | orchestrator | Saturday 14 February 2026 05:58:12 +0000 (0:00:00.786) 0:21:24.713 ***** 2026-02-14 05:58:33.861597 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:33.861610 | orchestrator | 2026-02-14 05:58:33.861623 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-14 05:58:33.861636 | orchestrator | Saturday 14 February 2026 05:58:13 +0000 (0:00:00.939) 0:21:25.653 ***** 2026-02-14 05:58:33.861648 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:33.861660 | orchestrator | 2026-02-14 05:58:33.861687 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-14 05:58:33.861699 | orchestrator | Saturday 14 February 2026 05:58:14 +0000 (0:00:00.825) 0:21:26.478 ***** 2026-02-14 05:58:33.861710 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:33.861721 | orchestrator | 2026-02-14 05:58:33.861732 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-14 05:58:33.861743 | orchestrator | Saturday 14 February 2026 05:58:15 +0000 (0:00:00.924) 0:21:27.404 ***** 2026-02-14 05:58:33.861754 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:33.861765 | orchestrator | 2026-02-14 05:58:33.861776 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-14 05:58:33.861787 | orchestrator | Saturday 14 February 2026 05:58:16 +0000 (0:00:00.926) 0:21:28.331 ***** 2026-02-14 05:58:33.861798 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:33.861809 | orchestrator | 2026-02-14 05:58:33.861820 | orchestrator | TASK [ceph-facts : Set 
current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-14 05:58:33.861833 | orchestrator | Saturday 14 February 2026 05:58:16 +0000 (0:00:00.772) 0:21:29.103 ***** 2026-02-14 05:58:33.861844 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:33.861855 | orchestrator | 2026-02-14 05:58:33.861866 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-14 05:58:33.861877 | orchestrator | Saturday 14 February 2026 05:58:17 +0000 (0:00:00.836) 0:21:29.939 ***** 2026-02-14 05:58:33.861887 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:33.861899 | orchestrator | 2026-02-14 05:58:33.861910 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-14 05:58:33.861921 | orchestrator | Saturday 14 February 2026 05:58:18 +0000 (0:00:00.853) 0:21:30.793 ***** 2026-02-14 05:58:33.861932 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:33.861984 | orchestrator | 2026-02-14 05:58:33.862072 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-14 05:58:33.862088 | orchestrator | Saturday 14 February 2026 05:58:19 +0000 (0:00:00.801) 0:21:31.594 ***** 2026-02-14 05:58:33.862099 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:33.862110 | orchestrator | 2026-02-14 05:58:33.862120 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-14 05:58:33.862131 | orchestrator | Saturday 14 February 2026 05:58:20 +0000 (0:00:00.816) 0:21:32.411 ***** 2026-02-14 05:58:33.862142 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-14 05:58:33.862154 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-14 05:58:33.862165 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-14 05:58:33.862175 | orchestrator | 
skipping: [testbed-node-1] 2026-02-14 05:58:33.862186 | orchestrator | 2026-02-14 05:58:33.862197 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-14 05:58:33.862208 | orchestrator | Saturday 14 February 2026 05:58:21 +0000 (0:00:01.120) 0:21:33.531 ***** 2026-02-14 05:58:33.862218 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-14 05:58:33.862229 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-14 05:58:33.862240 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-14 05:58:33.862250 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:33.862261 | orchestrator | 2026-02-14 05:58:33.862272 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-14 05:58:33.862282 | orchestrator | Saturday 14 February 2026 05:58:22 +0000 (0:00:01.073) 0:21:34.605 ***** 2026-02-14 05:58:33.862293 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-14 05:58:33.862304 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-14 05:58:33.862314 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-14 05:58:33.862325 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:33.862336 | orchestrator | 2026-02-14 05:58:33.862346 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-14 05:58:33.862357 | orchestrator | Saturday 14 February 2026 05:58:23 +0000 (0:00:01.193) 0:21:35.799 ***** 2026-02-14 05:58:33.862368 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:33.862379 | orchestrator | 2026-02-14 05:58:33.862389 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-14 05:58:33.862400 | orchestrator | Saturday 14 February 2026 05:58:24 +0000 (0:00:00.856) 0:21:36.655 ***** 2026-02-14 05:58:33.862412 | 
orchestrator | skipping: [testbed-node-1] => (item=0)  2026-02-14 05:58:33.862423 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:33.862433 | orchestrator | 2026-02-14 05:58:33.862444 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-14 05:58:33.862455 | orchestrator | Saturday 14 February 2026 05:58:25 +0000 (0:00:00.929) 0:21:37.585 ***** 2026-02-14 05:58:33.862466 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:33.862476 | orchestrator | 2026-02-14 05:58:33.862487 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-14 05:58:33.862498 | orchestrator | Saturday 14 February 2026 05:58:26 +0000 (0:00:00.984) 0:21:38.569 ***** 2026-02-14 05:58:33.862509 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-14 05:58:33.862520 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-14 05:58:33.862530 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-14 05:58:33.862541 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:33.862552 | orchestrator | 2026-02-14 05:58:33.862562 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-14 05:58:33.862573 | orchestrator | Saturday 14 February 2026 05:58:27 +0000 (0:00:01.064) 0:21:39.634 ***** 2026-02-14 05:58:33.862584 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:33.862603 | orchestrator | 2026-02-14 05:58:33.862614 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-14 05:58:33.862625 | orchestrator | Saturday 14 February 2026 05:58:28 +0000 (0:00:00.787) 0:21:40.421 ***** 2026-02-14 05:58:33.862636 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:33.862647 | orchestrator | 2026-02-14 05:58:33.862663 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] 
**************************************** 2026-02-14 05:58:33.862674 | orchestrator | Saturday 14 February 2026 05:58:28 +0000 (0:00:00.773) 0:21:41.195 ***** 2026-02-14 05:58:33.862685 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:33.862696 | orchestrator | 2026-02-14 05:58:33.862707 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-02-14 05:58:33.862717 | orchestrator | Saturday 14 February 2026 05:58:29 +0000 (0:00:00.817) 0:21:42.012 ***** 2026-02-14 05:58:33.862728 | orchestrator | skipping: [testbed-node-1] 2026-02-14 05:58:33.862739 | orchestrator | 2026-02-14 05:58:33.862750 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-02-14 05:58:33.862761 | orchestrator | 2026-02-14 05:58:33.862771 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-02-14 05:58:33.862782 | orchestrator | Saturday 14 February 2026 05:58:30 +0000 (0:00:01.033) 0:21:43.046 ***** 2026-02-14 05:58:33.862793 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:58:33.862804 | orchestrator | 2026-02-14 05:58:33.862815 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-14 05:58:33.862825 | orchestrator | Saturday 14 February 2026 05:58:31 +0000 (0:00:00.786) 0:21:43.832 ***** 2026-02-14 05:58:33.862836 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:58:33.862847 | orchestrator | 2026-02-14 05:58:33.862858 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-14 05:58:33.862868 | orchestrator | Saturday 14 February 2026 05:58:32 +0000 (0:00:00.783) 0:21:44.616 ***** 2026-02-14 05:58:33.862879 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:58:33.862890 | orchestrator | 2026-02-14 05:58:33.862901 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 
2026-02-14 05:58:33.862911 | orchestrator | Saturday 14 February 2026 05:58:33 +0000 (0:00:00.794) 0:21:45.410 ***** 2026-02-14 05:58:33.862929 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.933094 | orchestrator | 2026-02-14 05:59:06.933217 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-14 05:59:06.933235 | orchestrator | Saturday 14 February 2026 05:58:33 +0000 (0:00:00.764) 0:21:46.175 ***** 2026-02-14 05:59:06.933247 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.933259 | orchestrator | 2026-02-14 05:59:06.933270 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-14 05:59:06.933282 | orchestrator | Saturday 14 February 2026 05:58:34 +0000 (0:00:00.942) 0:21:47.118 ***** 2026-02-14 05:59:06.933294 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.933305 | orchestrator | 2026-02-14 05:59:06.933316 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-14 05:59:06.933327 | orchestrator | Saturday 14 February 2026 05:58:35 +0000 (0:00:00.812) 0:21:47.930 ***** 2026-02-14 05:59:06.933338 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.933349 | orchestrator | 2026-02-14 05:59:06.933359 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-14 05:59:06.933370 | orchestrator | Saturday 14 February 2026 05:58:36 +0000 (0:00:00.805) 0:21:48.736 ***** 2026-02-14 05:59:06.933381 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.933392 | orchestrator | 2026-02-14 05:59:06.933402 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-14 05:59:06.933413 | orchestrator | Saturday 14 February 2026 05:58:37 +0000 (0:00:00.809) 0:21:49.546 ***** 2026-02-14 05:59:06.933424 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.933435 
| orchestrator | 2026-02-14 05:59:06.933446 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-14 05:59:06.933485 | orchestrator | Saturday 14 February 2026 05:58:38 +0000 (0:00:00.849) 0:21:50.395 ***** 2026-02-14 05:59:06.933496 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.933507 | orchestrator | 2026-02-14 05:59:06.933518 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-14 05:59:06.933529 | orchestrator | Saturday 14 February 2026 05:58:38 +0000 (0:00:00.804) 0:21:51.199 ***** 2026-02-14 05:59:06.933539 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.933552 | orchestrator | 2026-02-14 05:59:06.933565 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-14 05:59:06.933578 | orchestrator | Saturday 14 February 2026 05:58:39 +0000 (0:00:00.842) 0:21:52.042 ***** 2026-02-14 05:59:06.933591 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.933603 | orchestrator | 2026-02-14 05:59:06.933616 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-14 05:59:06.933628 | orchestrator | Saturday 14 February 2026 05:58:40 +0000 (0:00:00.873) 0:21:52.916 ***** 2026-02-14 05:59:06.933641 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.933653 | orchestrator | 2026-02-14 05:59:06.933666 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-14 05:59:06.933678 | orchestrator | Saturday 14 February 2026 05:58:41 +0000 (0:00:00.785) 0:21:53.701 ***** 2026-02-14 05:59:06.933690 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.933703 | orchestrator | 2026-02-14 05:59:06.933715 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-14 05:59:06.933728 | orchestrator | Saturday 14 February 2026 
05:58:42 +0000 (0:00:00.785) 0:21:54.487 ***** 2026-02-14 05:59:06.933740 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.933753 | orchestrator | 2026-02-14 05:59:06.933766 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-14 05:59:06.933778 | orchestrator | Saturday 14 February 2026 05:58:42 +0000 (0:00:00.798) 0:21:55.285 ***** 2026-02-14 05:59:06.933790 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.933803 | orchestrator | 2026-02-14 05:59:06.933816 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-14 05:59:06.933828 | orchestrator | Saturday 14 February 2026 05:58:43 +0000 (0:00:00.783) 0:21:56.069 ***** 2026-02-14 05:59:06.933841 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.933853 | orchestrator | 2026-02-14 05:59:06.933866 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-14 05:59:06.933879 | orchestrator | Saturday 14 February 2026 05:58:44 +0000 (0:00:00.780) 0:21:56.849 ***** 2026-02-14 05:59:06.933892 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.933904 | orchestrator | 2026-02-14 05:59:06.933930 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-14 05:59:06.933992 | orchestrator | Saturday 14 February 2026 05:58:45 +0000 (0:00:00.948) 0:21:57.798 ***** 2026-02-14 05:59:06.934003 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.934014 | orchestrator | 2026-02-14 05:59:06.934085 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-14 05:59:06.934097 | orchestrator | Saturday 14 February 2026 05:58:46 +0000 (0:00:00.769) 0:21:58.568 ***** 2026-02-14 05:59:06.934108 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.934119 | orchestrator | 2026-02-14 05:59:06.934129 | 
orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-14 05:59:06.934140 | orchestrator | Saturday 14 February 2026 05:58:47 +0000 (0:00:00.790) 0:21:59.358 ***** 2026-02-14 05:59:06.934184 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.934195 | orchestrator | 2026-02-14 05:59:06.934206 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-14 05:59:06.934217 | orchestrator | Saturday 14 February 2026 05:58:47 +0000 (0:00:00.828) 0:22:00.187 ***** 2026-02-14 05:59:06.934227 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.934238 | orchestrator | 2026-02-14 05:59:06.934249 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-14 05:59:06.934270 | orchestrator | Saturday 14 February 2026 05:58:48 +0000 (0:00:00.807) 0:22:00.994 ***** 2026-02-14 05:59:06.934281 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.934292 | orchestrator | 2026-02-14 05:59:06.934303 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-14 05:59:06.934314 | orchestrator | Saturday 14 February 2026 05:58:49 +0000 (0:00:00.768) 0:22:01.763 ***** 2026-02-14 05:59:06.934325 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.934335 | orchestrator | 2026-02-14 05:59:06.934366 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-14 05:59:06.934378 | orchestrator | Saturday 14 February 2026 05:58:50 +0000 (0:00:00.837) 0:22:02.601 ***** 2026-02-14 05:59:06.934389 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.934400 | orchestrator | 2026-02-14 05:59:06.934410 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-14 05:59:06.934421 | orchestrator | Saturday 14 February 2026 05:58:51 +0000 (0:00:00.836) 0:22:03.437 ***** 
2026-02-14 05:59:06.934431 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.934442 | orchestrator | 2026-02-14 05:59:06.934453 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-14 05:59:06.934464 | orchestrator | Saturday 14 February 2026 05:58:51 +0000 (0:00:00.790) 0:22:04.228 ***** 2026-02-14 05:59:06.934474 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.934485 | orchestrator | 2026-02-14 05:59:06.934496 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-14 05:59:06.934507 | orchestrator | Saturday 14 February 2026 05:58:52 +0000 (0:00:00.790) 0:22:05.018 ***** 2026-02-14 05:59:06.934517 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.934528 | orchestrator | 2026-02-14 05:59:06.934539 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-14 05:59:06.934550 | orchestrator | Saturday 14 February 2026 05:58:53 +0000 (0:00:00.786) 0:22:05.805 ***** 2026-02-14 05:59:06.934560 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.934571 | orchestrator | 2026-02-14 05:59:06.934582 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-14 05:59:06.934592 | orchestrator | Saturday 14 February 2026 05:58:54 +0000 (0:00:00.805) 0:22:06.610 ***** 2026-02-14 05:59:06.934603 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.934613 | orchestrator | 2026-02-14 05:59:06.934624 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-14 05:59:06.934635 | orchestrator | Saturday 14 February 2026 05:58:55 +0000 (0:00:00.890) 0:22:07.501 ***** 2026-02-14 05:59:06.934646 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.934656 | orchestrator | 2026-02-14 05:59:06.934667 | orchestrator | TASK [ceph-container-common : Include release.yml] 
***************************** 2026-02-14 05:59:06.934678 | orchestrator | Saturday 14 February 2026 05:58:56 +0000 (0:00:00.839) 0:22:08.340 ***** 2026-02-14 05:59:06.934688 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.934699 | orchestrator | 2026-02-14 05:59:06.934709 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-14 05:59:06.934720 | orchestrator | Saturday 14 February 2026 05:58:56 +0000 (0:00:00.777) 0:22:09.118 ***** 2026-02-14 05:59:06.934731 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.934741 | orchestrator | 2026-02-14 05:59:06.934752 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-14 05:59:06.934763 | orchestrator | Saturday 14 February 2026 05:58:57 +0000 (0:00:00.796) 0:22:09.915 ***** 2026-02-14 05:59:06.934773 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.934784 | orchestrator | 2026-02-14 05:59:06.934795 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-14 05:59:06.934805 | orchestrator | Saturday 14 February 2026 05:58:58 +0000 (0:00:00.789) 0:22:10.705 ***** 2026-02-14 05:59:06.934816 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.934827 | orchestrator | 2026-02-14 05:59:06.934837 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-14 05:59:06.934855 | orchestrator | Saturday 14 February 2026 05:58:59 +0000 (0:00:00.842) 0:22:11.548 ***** 2026-02-14 05:59:06.934866 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.934876 | orchestrator | 2026-02-14 05:59:06.934887 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-14 05:59:06.934898 | orchestrator | Saturday 14 February 2026 05:59:00 +0000 (0:00:00.837) 0:22:12.385 ***** 2026-02-14 05:59:06.934908 | orchestrator | skipping: 
[testbed-node-2] 2026-02-14 05:59:06.934919 | orchestrator | 2026-02-14 05:59:06.934930 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-14 05:59:06.934966 | orchestrator | Saturday 14 February 2026 05:59:00 +0000 (0:00:00.835) 0:22:13.221 ***** 2026-02-14 05:59:06.934977 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.934988 | orchestrator | 2026-02-14 05:59:06.934999 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-14 05:59:06.935016 | orchestrator | Saturday 14 February 2026 05:59:01 +0000 (0:00:00.826) 0:22:14.048 ***** 2026-02-14 05:59:06.935027 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.935038 | orchestrator | 2026-02-14 05:59:06.935049 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-14 05:59:06.935061 | orchestrator | Saturday 14 February 2026 05:59:02 +0000 (0:00:00.855) 0:22:14.903 ***** 2026-02-14 05:59:06.935072 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.935083 | orchestrator | 2026-02-14 05:59:06.935094 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-14 05:59:06.935105 | orchestrator | Saturday 14 February 2026 05:59:03 +0000 (0:00:00.850) 0:22:15.754 ***** 2026-02-14 05:59:06.935116 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.935127 | orchestrator | 2026-02-14 05:59:06.935138 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-14 05:59:06.935148 | orchestrator | Saturday 14 February 2026 05:59:04 +0000 (0:00:00.831) 0:22:16.585 ***** 2026-02-14 05:59:06.935159 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.935170 | orchestrator | 2026-02-14 05:59:06.935180 | orchestrator | TASK [ceph-config : Run 
'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-14 05:59:06.935191 | orchestrator | Saturday 14 February 2026 05:59:05 +0000 (0:00:00.980) 0:22:17.566 ***** 2026-02-14 05:59:06.935202 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.935213 | orchestrator | 2026-02-14 05:59:06.935223 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-14 05:59:06.935234 | orchestrator | Saturday 14 February 2026 05:59:06 +0000 (0:00:00.849) 0:22:18.416 ***** 2026-02-14 05:59:06.935244 | orchestrator | skipping: [testbed-node-2] 2026-02-14 05:59:06.935255 | orchestrator | 2026-02-14 05:59:06.935272 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-14 06:00:00.836472 | orchestrator | Saturday 14 February 2026 05:59:06 +0000 (0:00:00.829) 0:22:19.246 ***** 2026-02-14 06:00:00.836589 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:00:00.836605 | orchestrator | 2026-02-14 06:00:00.836618 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-14 06:00:00.836629 | orchestrator | Saturday 14 February 2026 05:59:07 +0000 (0:00:00.815) 0:22:20.062 ***** 2026-02-14 06:00:00.836640 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:00:00.836651 | orchestrator | 2026-02-14 06:00:00.836663 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-14 06:00:00.836674 | orchestrator | Saturday 14 February 2026 05:59:08 +0000 (0:00:00.954) 0:22:21.016 ***** 2026-02-14 06:00:00.836685 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:00:00.836696 | orchestrator | 2026-02-14 06:00:00.836707 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-14 06:00:00.836717 | orchestrator | Saturday 14 February 2026 05:59:09 +0000 (0:00:00.832) 0:22:21.849 ***** 2026-02-14 
06:00:00.836729 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:00:00.836765 | orchestrator | 2026-02-14 06:00:00.836777 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-14 06:00:00.836788 | orchestrator | Saturday 14 February 2026 05:59:10 +0000 (0:00:01.046) 0:22:22.895 ***** 2026-02-14 06:00:00.836799 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:00:00.836810 | orchestrator | 2026-02-14 06:00:00.836820 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-14 06:00:00.836831 | orchestrator | Saturday 14 February 2026 05:59:11 +0000 (0:00:00.829) 0:22:23.725 ***** 2026-02-14 06:00:00.836841 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:00:00.836852 | orchestrator | 2026-02-14 06:00:00.836864 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-14 06:00:00.836876 | orchestrator | Saturday 14 February 2026 05:59:12 +0000 (0:00:00.796) 0:22:24.521 ***** 2026-02-14 06:00:00.836886 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:00:00.836897 | orchestrator | 2026-02-14 06:00:00.836908 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-14 06:00:00.836919 | orchestrator | Saturday 14 February 2026 05:59:12 +0000 (0:00:00.786) 0:22:25.307 ***** 2026-02-14 06:00:00.836984 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:00:00.837004 | orchestrator | 2026-02-14 06:00:00.837025 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-14 06:00:00.837040 | orchestrator | Saturday 14 February 2026 05:59:13 +0000 (0:00:00.809) 0:22:26.117 ***** 2026-02-14 06:00:00.837053 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:00:00.837066 | orchestrator | 2026-02-14 06:00:00.837079 | orchestrator | TASK 
[ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-14 06:00:00.837091 | orchestrator | Saturday 14 February 2026 05:59:14 +0000 (0:00:00.809) 0:22:26.926 ***** 2026-02-14 06:00:00.837103 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:00:00.837116 | orchestrator | 2026-02-14 06:00:00.837128 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-14 06:00:00.837141 | orchestrator | Saturday 14 February 2026 05:59:15 +0000 (0:00:00.853) 0:22:27.780 ***** 2026-02-14 06:00:00.837154 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-14 06:00:00.837168 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-14 06:00:00.837180 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-14 06:00:00.837190 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:00:00.837201 | orchestrator | 2026-02-14 06:00:00.837212 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-14 06:00:00.837223 | orchestrator | Saturday 14 February 2026 05:59:17 +0000 (0:00:01.759) 0:22:29.539 ***** 2026-02-14 06:00:00.837234 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-14 06:00:00.837244 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-14 06:00:00.837255 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-14 06:00:00.837266 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:00:00.837276 | orchestrator | 2026-02-14 06:00:00.837301 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-14 06:00:00.837313 | orchestrator | Saturday 14 February 2026 05:59:18 +0000 (0:00:01.183) 0:22:30.723 ***** 2026-02-14 06:00:00.837324 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-14 06:00:00.837334 | orchestrator | skipping: 
[testbed-node-2] => (item=testbed-node-4)  2026-02-14 06:00:00.837345 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-14 06:00:00.837356 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:00:00.837367 | orchestrator | 2026-02-14 06:00:00.837377 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-14 06:00:00.837388 | orchestrator | Saturday 14 February 2026 05:59:19 +0000 (0:00:01.179) 0:22:31.903 ***** 2026-02-14 06:00:00.837399 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:00:00.837418 | orchestrator | 2026-02-14 06:00:00.837429 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-14 06:00:00.837440 | orchestrator | Saturday 14 February 2026 05:59:20 +0000 (0:00:00.807) 0:22:32.710 ***** 2026-02-14 06:00:00.837452 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-02-14 06:00:00.837463 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:00:00.837474 | orchestrator | 2026-02-14 06:00:00.837485 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-14 06:00:00.837495 | orchestrator | Saturday 14 February 2026 05:59:21 +0000 (0:00:00.879) 0:22:33.589 ***** 2026-02-14 06:00:00.837506 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:00:00.837517 | orchestrator | 2026-02-14 06:00:00.837528 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-14 06:00:00.837538 | orchestrator | Saturday 14 February 2026 05:59:22 +0000 (0:00:00.864) 0:22:34.453 ***** 2026-02-14 06:00:00.837550 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-14 06:00:00.837579 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-14 06:00:00.837590 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-14 06:00:00.837601 | orchestrator | skipping: 
[testbed-node-2] 2026-02-14 06:00:00.837612 | orchestrator | 2026-02-14 06:00:00.837623 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-14 06:00:00.837634 | orchestrator | Saturday 14 February 2026 05:59:23 +0000 (0:00:01.197) 0:22:35.651 ***** 2026-02-14 06:00:00.837645 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:00:00.837655 | orchestrator | 2026-02-14 06:00:00.837666 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-14 06:00:00.837677 | orchestrator | Saturday 14 February 2026 05:59:24 +0000 (0:00:00.792) 0:22:36.443 ***** 2026-02-14 06:00:00.837688 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:00:00.837699 | orchestrator | 2026-02-14 06:00:00.837709 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-14 06:00:00.837720 | orchestrator | Saturday 14 February 2026 05:59:24 +0000 (0:00:00.831) 0:22:37.275 ***** 2026-02-14 06:00:00.837731 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:00:00.837742 | orchestrator | 2026-02-14 06:00:00.837753 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-02-14 06:00:00.837763 | orchestrator | Saturday 14 February 2026 05:59:25 +0000 (0:00:00.931) 0:22:38.206 ***** 2026-02-14 06:00:00.837774 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:00:00.837785 | orchestrator | 2026-02-14 06:00:00.837796 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-02-14 06:00:00.837807 | orchestrator | 2026-02-14 06:00:00.837818 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-02-14 06:00:00.837829 | orchestrator | Saturday 14 February 2026 05:59:27 +0000 (0:00:01.469) 0:22:39.676 ***** 2026-02-14 06:00:00.837840 | orchestrator | changed: [testbed-node-0] 2026-02-14 06:00:00.837851 | 
orchestrator | 2026-02-14 06:00:00.837861 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-02-14 06:00:00.837872 | orchestrator | Saturday 14 February 2026 05:59:40 +0000 (0:00:13.161) 0:22:52.837 ***** 2026-02-14 06:00:00.837883 | orchestrator | changed: [testbed-node-0] 2026-02-14 06:00:00.837894 | orchestrator | 2026-02-14 06:00:00.837905 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-14 06:00:00.837915 | orchestrator | Saturday 14 February 2026 05:59:43 +0000 (0:00:02.549) 0:22:55.387 ***** 2026-02-14 06:00:00.837952 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-02-14 06:00:00.837965 | orchestrator | 2026-02-14 06:00:00.837976 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-14 06:00:00.837986 | orchestrator | Saturday 14 February 2026 05:59:44 +0000 (0:00:01.162) 0:22:56.549 ***** 2026-02-14 06:00:00.837997 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:00:00.838008 | orchestrator | 2026-02-14 06:00:00.838073 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-14 06:00:00.838130 | orchestrator | Saturday 14 February 2026 05:59:45 +0000 (0:00:01.511) 0:22:58.066 ***** 2026-02-14 06:00:00.838141 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:00:00.838152 | orchestrator | 2026-02-14 06:00:00.838163 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-14 06:00:00.838173 | orchestrator | Saturday 14 February 2026 05:59:46 +0000 (0:00:01.203) 0:22:59.270 ***** 2026-02-14 06:00:00.838184 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:00:00.838194 | orchestrator | 2026-02-14 06:00:00.838205 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-14 06:00:00.838216 | orchestrator | 
Saturday 14 February 2026 05:59:48 +0000 (0:00:01.490) 0:23:00.761 ***** 2026-02-14 06:00:00.838226 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:00:00.838237 | orchestrator | 2026-02-14 06:00:00.838247 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-14 06:00:00.838258 | orchestrator | Saturday 14 February 2026 05:59:49 +0000 (0:00:01.124) 0:23:01.886 ***** 2026-02-14 06:00:00.838268 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:00:00.838279 | orchestrator | 2026-02-14 06:00:00.838290 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-14 06:00:00.838300 | orchestrator | Saturday 14 February 2026 05:59:50 +0000 (0:00:01.232) 0:23:03.119 ***** 2026-02-14 06:00:00.838317 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:00:00.838328 | orchestrator | 2026-02-14 06:00:00.838338 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-14 06:00:00.838350 | orchestrator | Saturday 14 February 2026 05:59:51 +0000 (0:00:01.167) 0:23:04.286 ***** 2026-02-14 06:00:00.838360 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:00:00.838371 | orchestrator | 2026-02-14 06:00:00.838381 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-14 06:00:00.838392 | orchestrator | Saturday 14 February 2026 05:59:53 +0000 (0:00:01.154) 0:23:05.441 ***** 2026-02-14 06:00:00.838403 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:00:00.838413 | orchestrator | 2026-02-14 06:00:00.838424 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-14 06:00:00.838435 | orchestrator | Saturday 14 February 2026 05:59:54 +0000 (0:00:01.115) 0:23:06.556 ***** 2026-02-14 06:00:00.838445 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-14 06:00:00.838456 | orchestrator | ok: [testbed-node-0 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:00:00.838467 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 06:00:00.838478 | orchestrator | 2026-02-14 06:00:00.838488 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-14 06:00:00.838499 | orchestrator | Saturday 14 February 2026 05:59:56 +0000 (0:00:02.184) 0:23:08.741 ***** 2026-02-14 06:00:00.838510 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:00:00.838520 | orchestrator | 2026-02-14 06:00:00.838531 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-14 06:00:00.838542 | orchestrator | Saturday 14 February 2026 05:59:57 +0000 (0:00:01.285) 0:23:10.027 ***** 2026-02-14 06:00:00.838553 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-14 06:00:00.838572 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:00:24.871305 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 06:00:24.871424 | orchestrator | 2026-02-14 06:00:24.871440 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-14 06:00:24.871454 | orchestrator | Saturday 14 February 2026 06:00:00 +0000 (0:00:03.124) 0:23:13.152 ***** 2026-02-14 06:00:24.871466 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-14 06:00:24.871477 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-14 06:00:24.871488 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-14 06:00:24.871524 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:00:24.871536 | orchestrator | 2026-02-14 06:00:24.871547 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-14 06:00:24.871558 | 
orchestrator | Saturday 14 February 2026 06:00:02 +0000 (0:00:01.548) 0:23:14.701 ***** 2026-02-14 06:00:24.871570 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-14 06:00:24.871585 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-14 06:00:24.871596 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-14 06:00:24.871607 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:00:24.871618 | orchestrator | 2026-02-14 06:00:24.871629 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-14 06:00:24.871640 | orchestrator | Saturday 14 February 2026 06:00:04 +0000 (0:00:01.808) 0:23:16.509 ***** 2026-02-14 06:00:24.871653 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 06:00:24.871667 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 06:00:24.871692 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 06:00:24.871703 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:00:24.871714 | orchestrator | 2026-02-14 06:00:24.871725 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-14 06:00:24.871736 | orchestrator | Saturday 14 February 2026 06:00:05 +0000 (0:00:01.276) 0:23:17.785 ***** 2026-02-14 06:00:24.871750 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'fcade5e8eca4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-14 05:59:58.273790', 'end': '2026-02-14 05:59:58.328861', 'delta': '0:00:00.055071', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fcade5e8eca4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-14 06:00:24.871784 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'b8937503c016', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 
'name=ceph-mon-testbed-node-1'], 'start': '2026-02-14 05:59:58.915666', 'end': '2026-02-14 05:59:58.968700', 'delta': '0:00:00.053034', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b8937503c016'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-14 06:00:24.871807 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'bc1e9cbf1ddd', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-14 05:59:59.516952', 'end': '2026-02-14 05:59:59.571951', 'delta': '0:00:00.054999', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['bc1e9cbf1ddd'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-14 06:00:24.871818 | orchestrator | 2026-02-14 06:00:24.871830 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-14 06:00:24.871843 | orchestrator | Saturday 14 February 2026 06:00:06 +0000 (0:00:01.253) 0:23:19.039 ***** 2026-02-14 06:00:24.871856 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:00:24.871870 | orchestrator | 2026-02-14 06:00:24.871883 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-14 06:00:24.871896 | orchestrator | Saturday 14 February 2026 06:00:08 
+0000 (0:00:01.315) 0:23:20.354 ***** 2026-02-14 06:00:24.871908 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:00:24.871992 | orchestrator | 2026-02-14 06:00:24.872007 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-14 06:00:24.872021 | orchestrator | Saturday 14 February 2026 06:00:09 +0000 (0:00:01.223) 0:23:21.577 ***** 2026-02-14 06:00:24.872033 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:00:24.872045 | orchestrator | 2026-02-14 06:00:24.872058 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-14 06:00:24.872070 | orchestrator | Saturday 14 February 2026 06:00:10 +0000 (0:00:01.201) 0:23:22.778 ***** 2026-02-14 06:00:24.872083 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:00:24.872095 | orchestrator | 2026-02-14 06:00:24.872108 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-14 06:00:24.872120 | orchestrator | Saturday 14 February 2026 06:00:12 +0000 (0:00:01.982) 0:23:24.761 ***** 2026-02-14 06:00:24.872133 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:00:24.872145 | orchestrator | 2026-02-14 06:00:24.872158 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-14 06:00:24.872170 | orchestrator | Saturday 14 February 2026 06:00:13 +0000 (0:00:01.189) 0:23:25.951 ***** 2026-02-14 06:00:24.872183 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:00:24.872196 | orchestrator | 2026-02-14 06:00:24.872206 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-14 06:00:24.872217 | orchestrator | Saturday 14 February 2026 06:00:14 +0000 (0:00:01.228) 0:23:27.179 ***** 2026-02-14 06:00:24.872228 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:00:24.872239 | orchestrator | 2026-02-14 06:00:24.872249 | orchestrator | TASK [ceph-facts : Set_fact fsid] 
********************************************** 2026-02-14 06:00:24.872260 | orchestrator | Saturday 14 February 2026 06:00:16 +0000 (0:00:01.866) 0:23:29.045 ***** 2026-02-14 06:00:24.872277 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:00:24.872288 | orchestrator | 2026-02-14 06:00:24.872299 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-14 06:00:24.872310 | orchestrator | Saturday 14 February 2026 06:00:17 +0000 (0:00:01.161) 0:23:30.207 ***** 2026-02-14 06:00:24.872328 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:00:24.872339 | orchestrator | 2026-02-14 06:00:24.872350 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-14 06:00:24.872360 | orchestrator | Saturday 14 February 2026 06:00:19 +0000 (0:00:01.183) 0:23:31.391 ***** 2026-02-14 06:00:24.872371 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:00:24.872382 | orchestrator | 2026-02-14 06:00:24.872392 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-14 06:00:24.872403 | orchestrator | Saturday 14 February 2026 06:00:20 +0000 (0:00:01.119) 0:23:32.510 ***** 2026-02-14 06:00:24.872414 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:00:24.872425 | orchestrator | 2026-02-14 06:00:24.872435 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-14 06:00:24.872446 | orchestrator | Saturday 14 February 2026 06:00:21 +0000 (0:00:01.118) 0:23:33.629 ***** 2026-02-14 06:00:24.872457 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:00:24.872467 | orchestrator | 2026-02-14 06:00:24.872478 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-14 06:00:24.872489 | orchestrator | Saturday 14 February 2026 06:00:22 +0000 (0:00:01.221) 0:23:34.850 ***** 2026-02-14 06:00:24.872500 | orchestrator | 
skipping: [testbed-node-0] 2026-02-14 06:00:24.872511 | orchestrator | 2026-02-14 06:00:24.872521 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-14 06:00:24.872532 | orchestrator | Saturday 14 February 2026 06:00:23 +0000 (0:00:01.183) 0:23:36.034 ***** 2026-02-14 06:00:24.872543 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:00:24.872554 | orchestrator | 2026-02-14 06:00:24.872572 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-14 06:00:27.482366 | orchestrator | Saturday 14 February 2026 06:00:24 +0000 (0:00:01.151) 0:23:37.186 ***** 2026-02-14 06:00:27.482466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:00:27.482481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:00:27.482492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 
'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:00:27.482504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-02-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-14 06:00:27.482517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:00:27.482556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:00:27.482581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 
Bytes', 'host': '', 'holders': []}})  2026-02-14 06:00:27.482613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7d6eeb05', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part16', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part14', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part15', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part1', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 
'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-14 06:00:27.482625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:00:27.482636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:00:27.482652 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:00:27.482664 | orchestrator | 2026-02-14 06:00:27.482675 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-14 06:00:27.482685 | orchestrator | Saturday 14 February 2026 06:00:26 +0000 (0:00:01.324) 0:23:38.510 ***** 2026-02-14 06:00:27.482701 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:00:27.482714 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:00:27.482732 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:00:38.446740 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-02-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 
'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:00:38.446855 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:00:38.446872 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:00:38.446973 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:00:38.447036 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7d6eeb05', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part16', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part14', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part15', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part1', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:00:38.447053 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:00:38.447080 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:00:38.447099 | orchestrator | skipping: [testbed-node-0] 2026-02-14 
06:00:38.447118 | orchestrator | 2026-02-14 06:00:38.447136 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-14 06:00:38.447160 | orchestrator | Saturday 14 February 2026 06:00:27 +0000 (0:00:01.292) 0:23:39.803 ***** 2026-02-14 06:00:38.447181 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:00:38.447198 | orchestrator | 2026-02-14 06:00:38.447213 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-14 06:00:38.447229 | orchestrator | Saturday 14 February 2026 06:00:29 +0000 (0:00:01.545) 0:23:41.349 ***** 2026-02-14 06:00:38.447244 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:00:38.447259 | orchestrator | 2026-02-14 06:00:38.447277 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-14 06:00:38.447293 | orchestrator | Saturday 14 February 2026 06:00:30 +0000 (0:00:01.156) 0:23:42.505 ***** 2026-02-14 06:00:38.447310 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:00:38.447329 | orchestrator | 2026-02-14 06:00:38.447354 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-14 06:00:38.447373 | orchestrator | Saturday 14 February 2026 06:00:31 +0000 (0:00:01.532) 0:23:44.038 ***** 2026-02-14 06:00:38.447387 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:00:38.447405 | orchestrator | 2026-02-14 06:00:38.447421 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-14 06:00:38.447437 | orchestrator | Saturday 14 February 2026 06:00:32 +0000 (0:00:01.267) 0:23:45.306 ***** 2026-02-14 06:00:38.447454 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:00:38.447470 | orchestrator | 2026-02-14 06:00:38.447487 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-14 06:00:38.447505 | orchestrator | Saturday 14 February 2026 
06:00:34 +0000 (0:00:01.273) 0:23:46.579 ***** 2026-02-14 06:00:38.447521 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:00:38.447537 | orchestrator | 2026-02-14 06:00:38.447552 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-14 06:00:38.447568 | orchestrator | Saturday 14 February 2026 06:00:35 +0000 (0:00:01.208) 0:23:47.788 ***** 2026-02-14 06:00:38.447583 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-14 06:00:38.447599 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-14 06:00:38.447616 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-14 06:00:38.447632 | orchestrator | 2026-02-14 06:00:38.447648 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-14 06:00:38.447665 | orchestrator | Saturday 14 February 2026 06:00:37 +0000 (0:00:01.788) 0:23:49.577 ***** 2026-02-14 06:00:38.447681 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-14 06:00:38.447698 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-14 06:00:38.447715 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-14 06:00:38.447731 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:00:38.447747 | orchestrator | 2026-02-14 06:00:38.447777 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-14 06:01:24.053374 | orchestrator | Saturday 14 February 2026 06:00:38 +0000 (0:00:01.178) 0:23:50.755 ***** 2026-02-14 06:01:24.053469 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:01:24.053479 | orchestrator | 2026-02-14 06:01:24.053487 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-14 06:01:24.053514 | orchestrator | Saturday 14 February 2026 06:00:39 +0000 (0:00:01.196) 0:23:51.952 ***** 2026-02-14 06:01:24.053522 | 
orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-14 06:01:24.053528 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:01:24.053535 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 06:01:24.053541 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-14 06:01:24.053547 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-14 06:01:24.053554 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-14 06:01:24.053560 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-14 06:01:24.053566 | orchestrator | 2026-02-14 06:01:24.053573 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-14 06:01:24.053579 | orchestrator | Saturday 14 February 2026 06:00:41 +0000 (0:00:01.965) 0:23:53.917 ***** 2026-02-14 06:01:24.053585 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-14 06:01:24.053591 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:01:24.053597 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 06:01:24.053603 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-14 06:01:24.053609 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-14 06:01:24.053615 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-14 06:01:24.053621 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-14 06:01:24.053627 | orchestrator | 2026-02-14 06:01:24.053633 | 
orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-14 06:01:24.053639 | orchestrator | Saturday 14 February 2026 06:00:44 +0000 (0:00:02.669) 0:23:56.587 ***** 2026-02-14 06:01:24.053645 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0 2026-02-14 06:01:24.053653 | orchestrator | 2026-02-14 06:01:24.053659 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-14 06:01:24.053665 | orchestrator | Saturday 14 February 2026 06:00:45 +0000 (0:00:01.108) 0:23:57.696 ***** 2026-02-14 06:01:24.053671 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0 2026-02-14 06:01:24.053677 | orchestrator | 2026-02-14 06:01:24.053683 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-14 06:01:24.053689 | orchestrator | Saturday 14 February 2026 06:00:46 +0000 (0:00:01.142) 0:23:58.838 ***** 2026-02-14 06:01:24.053695 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:01:24.053701 | orchestrator | 2026-02-14 06:01:24.053708 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-14 06:01:24.053714 | orchestrator | Saturday 14 February 2026 06:00:48 +0000 (0:00:01.696) 0:24:00.535 ***** 2026-02-14 06:01:24.053731 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:01:24.053737 | orchestrator | 2026-02-14 06:01:24.053744 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-14 06:01:24.053761 | orchestrator | Saturday 14 February 2026 06:00:49 +0000 (0:00:01.126) 0:24:01.662 ***** 2026-02-14 06:01:24.053768 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:01:24.053775 | orchestrator | 2026-02-14 06:01:24.053781 | orchestrator | TASK [ceph-handler : Check for a rgw container] 
******************************** 2026-02-14 06:01:24.053787 | orchestrator | Saturday 14 February 2026 06:00:50 +0000 (0:00:01.168) 0:24:02.830 ***** 2026-02-14 06:01:24.053793 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:01:24.053800 | orchestrator | 2026-02-14 06:01:24.053806 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-14 06:01:24.053818 | orchestrator | Saturday 14 February 2026 06:00:51 +0000 (0:00:01.156) 0:24:03.987 ***** 2026-02-14 06:01:24.053824 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:01:24.053830 | orchestrator | 2026-02-14 06:01:24.053837 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-14 06:01:24.053843 | orchestrator | Saturday 14 February 2026 06:00:53 +0000 (0:00:01.656) 0:24:05.643 ***** 2026-02-14 06:01:24.053849 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:01:24.053855 | orchestrator | 2026-02-14 06:01:24.053861 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-14 06:01:24.053867 | orchestrator | Saturday 14 February 2026 06:00:54 +0000 (0:00:01.153) 0:24:06.797 ***** 2026-02-14 06:01:24.053874 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:01:24.053880 | orchestrator | 2026-02-14 06:01:24.053886 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-14 06:01:24.053892 | orchestrator | Saturday 14 February 2026 06:00:55 +0000 (0:00:01.233) 0:24:08.030 ***** 2026-02-14 06:01:24.053899 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:01:24.053905 | orchestrator | 2026-02-14 06:01:24.053936 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-14 06:01:24.053947 | orchestrator | Saturday 14 February 2026 06:00:57 +0000 (0:00:01.730) 0:24:09.761 ***** 2026-02-14 06:01:24.053959 | orchestrator | ok: [testbed-node-0] 2026-02-14 
06:01:24.053970 | orchestrator | 2026-02-14 06:01:24.053980 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-14 06:01:24.054006 | orchestrator | Saturday 14 February 2026 06:00:59 +0000 (0:00:01.597) 0:24:11.359 ***** 2026-02-14 06:01:24.054061 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:01:24.054071 | orchestrator | 2026-02-14 06:01:24.054079 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-14 06:01:24.054086 | orchestrator | Saturday 14 February 2026 06:01:00 +0000 (0:00:01.166) 0:24:12.525 ***** 2026-02-14 06:01:24.054094 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:01:24.054101 | orchestrator | 2026-02-14 06:01:24.054109 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-14 06:01:24.054116 | orchestrator | Saturday 14 February 2026 06:01:01 +0000 (0:00:01.223) 0:24:13.749 ***** 2026-02-14 06:01:24.054138 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:01:24.054146 | orchestrator | 2026-02-14 06:01:24.054161 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-14 06:01:24.054168 | orchestrator | Saturday 14 February 2026 06:01:02 +0000 (0:00:01.183) 0:24:14.933 ***** 2026-02-14 06:01:24.054175 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:01:24.054182 | orchestrator | 2026-02-14 06:01:24.054190 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-14 06:01:24.054197 | orchestrator | Saturday 14 February 2026 06:01:03 +0000 (0:00:01.145) 0:24:16.078 ***** 2026-02-14 06:01:24.054204 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:01:24.054211 | orchestrator | 2026-02-14 06:01:24.054219 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-14 06:01:24.054227 | orchestrator | Saturday 14 
February 2026 06:01:05 +0000 (0:00:01.312) 0:24:17.390 ***** 2026-02-14 06:01:24.054234 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:01:24.054241 | orchestrator | 2026-02-14 06:01:24.054248 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-14 06:01:24.054256 | orchestrator | Saturday 14 February 2026 06:01:06 +0000 (0:00:01.193) 0:24:18.584 ***** 2026-02-14 06:01:24.054263 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:01:24.054273 | orchestrator | 2026-02-14 06:01:24.054284 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-14 06:01:24.054294 | orchestrator | Saturday 14 February 2026 06:01:07 +0000 (0:00:01.324) 0:24:19.909 ***** 2026-02-14 06:01:24.054304 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:01:24.054315 | orchestrator | 2026-02-14 06:01:24.054324 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-14 06:01:24.054341 | orchestrator | Saturday 14 February 2026 06:01:08 +0000 (0:00:01.162) 0:24:21.071 ***** 2026-02-14 06:01:24.054351 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:01:24.054362 | orchestrator | 2026-02-14 06:01:24.054372 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-14 06:01:24.054381 | orchestrator | Saturday 14 February 2026 06:01:09 +0000 (0:00:01.176) 0:24:22.247 ***** 2026-02-14 06:01:24.054392 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:01:24.054402 | orchestrator | 2026-02-14 06:01:24.054413 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-14 06:01:24.054423 | orchestrator | Saturday 14 February 2026 06:01:11 +0000 (0:00:01.208) 0:24:23.456 ***** 2026-02-14 06:01:24.054434 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:01:24.054444 | orchestrator | 2026-02-14 06:01:24.054455 | orchestrator | TASK 
[ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-14 06:01:24.054466 | orchestrator | Saturday 14 February 2026 06:01:12 +0000 (0:00:01.172) 0:24:24.629 ***** 2026-02-14 06:01:24.054476 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:01:24.054485 | orchestrator | 2026-02-14 06:01:24.054491 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-14 06:01:24.054497 | orchestrator | Saturday 14 February 2026 06:01:13 +0000 (0:00:01.110) 0:24:25.740 ***** 2026-02-14 06:01:24.054504 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:01:24.054510 | orchestrator | 2026-02-14 06:01:24.054516 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-14 06:01:24.054522 | orchestrator | Saturday 14 February 2026 06:01:14 +0000 (0:00:01.156) 0:24:26.897 ***** 2026-02-14 06:01:24.054528 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:01:24.054534 | orchestrator | 2026-02-14 06:01:24.054547 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-14 06:01:24.054553 | orchestrator | Saturday 14 February 2026 06:01:15 +0000 (0:00:01.201) 0:24:28.098 ***** 2026-02-14 06:01:24.054559 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:01:24.054566 | orchestrator | 2026-02-14 06:01:24.054572 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-14 06:01:24.054578 | orchestrator | Saturday 14 February 2026 06:01:16 +0000 (0:00:01.169) 0:24:29.267 ***** 2026-02-14 06:01:24.054584 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:01:24.054590 | orchestrator | 2026-02-14 06:01:24.054597 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-14 06:01:24.054603 | orchestrator | Saturday 14 February 2026 06:01:18 +0000 (0:00:01.156) 0:24:30.424 ***** 2026-02-14 
06:01:24.054611 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:01:24.054621 | orchestrator | 2026-02-14 06:01:24.054631 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-14 06:01:24.054641 | orchestrator | Saturday 14 February 2026 06:01:19 +0000 (0:00:01.152) 0:24:31.576 ***** 2026-02-14 06:01:24.054663 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:01:24.054673 | orchestrator | 2026-02-14 06:01:24.054683 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-14 06:01:24.054693 | orchestrator | Saturday 14 February 2026 06:01:20 +0000 (0:00:01.249) 0:24:32.826 ***** 2026-02-14 06:01:24.054704 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:01:24.054715 | orchestrator | 2026-02-14 06:01:24.054725 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-14 06:01:24.054735 | orchestrator | Saturday 14 February 2026 06:01:21 +0000 (0:00:01.159) 0:24:33.986 ***** 2026-02-14 06:01:24.054746 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:01:24.054757 | orchestrator | 2026-02-14 06:01:24.054767 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-14 06:01:24.054779 | orchestrator | Saturday 14 February 2026 06:01:22 +0000 (0:00:01.176) 0:24:35.162 ***** 2026-02-14 06:01:24.054789 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:01:24.054799 | orchestrator | 2026-02-14 06:01:24.054876 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-14 06:02:14.272532 | orchestrator | Saturday 14 February 2026 06:01:24 +0000 (0:00:01.207) 0:24:36.370 ***** 2026-02-14 06:02:14.272661 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:02:14.272688 | orchestrator | 2026-02-14 06:02:14.272709 | orchestrator | TASK [ceph-container-common : Generate systemd ceph 
target file] *************** 2026-02-14 06:02:14.272729 | orchestrator | Saturday 14 February 2026 06:01:25 +0000 (0:00:01.132) 0:24:37.503 ***** 2026-02-14 06:02:14.272748 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:02:14.272769 | orchestrator | 2026-02-14 06:02:14.272790 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-14 06:02:14.272809 | orchestrator | Saturday 14 February 2026 06:01:27 +0000 (0:00:01.982) 0:24:39.485 ***** 2026-02-14 06:02:14.272825 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:02:14.272836 | orchestrator | 2026-02-14 06:02:14.272847 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-14 06:02:14.272858 | orchestrator | Saturday 14 February 2026 06:01:29 +0000 (0:00:02.468) 0:24:41.954 ***** 2026-02-14 06:02:14.272868 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0 2026-02-14 06:02:14.272880 | orchestrator | 2026-02-14 06:02:14.272891 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-14 06:02:14.272933 | orchestrator | Saturday 14 February 2026 06:01:30 +0000 (0:00:01.157) 0:24:43.112 ***** 2026-02-14 06:02:14.272947 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:02:14.272958 | orchestrator | 2026-02-14 06:02:14.272969 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-14 06:02:14.272980 | orchestrator | Saturday 14 February 2026 06:01:31 +0000 (0:00:01.216) 0:24:44.329 ***** 2026-02-14 06:02:14.272992 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:02:14.273002 | orchestrator | 2026-02-14 06:02:14.273014 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-14 06:02:14.273027 | orchestrator | Saturday 14 February 2026 06:01:33 +0000 (0:00:01.158) 0:24:45.487 ***** 2026-02-14 
06:02:14.273048 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-14 06:02:14.273069 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-14 06:02:14.273089 | orchestrator | 2026-02-14 06:02:14.273109 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-14 06:02:14.273129 | orchestrator | Saturday 14 February 2026 06:01:35 +0000 (0:00:01.882) 0:24:47.370 ***** 2026-02-14 06:02:14.273174 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:02:14.273188 | orchestrator | 2026-02-14 06:02:14.273202 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-14 06:02:14.273215 | orchestrator | Saturday 14 February 2026 06:01:36 +0000 (0:00:01.588) 0:24:48.958 ***** 2026-02-14 06:02:14.273227 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:02:14.273239 | orchestrator | 2026-02-14 06:02:14.273252 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-14 06:02:14.273264 | orchestrator | Saturday 14 February 2026 06:01:37 +0000 (0:00:01.258) 0:24:50.216 ***** 2026-02-14 06:02:14.273276 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:02:14.273289 | orchestrator | 2026-02-14 06:02:14.273302 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-14 06:02:14.273314 | orchestrator | Saturday 14 February 2026 06:01:39 +0000 (0:00:01.259) 0:24:51.476 ***** 2026-02-14 06:02:14.273326 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:02:14.273339 | orchestrator | 2026-02-14 06:02:14.273352 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-14 06:02:14.273364 | orchestrator | Saturday 14 February 2026 06:01:40 +0000 (0:00:01.134) 0:24:52.611 ***** 2026-02-14 06:02:14.273377 | orchestrator | included: 
/ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0 2026-02-14 06:02:14.273390 | orchestrator | 2026-02-14 06:02:14.273427 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-14 06:02:14.273478 | orchestrator | Saturday 14 February 2026 06:01:41 +0000 (0:00:01.184) 0:24:53.796 ***** 2026-02-14 06:02:14.273500 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:02:14.273518 | orchestrator | 2026-02-14 06:02:14.273537 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-14 06:02:14.273549 | orchestrator | Saturday 14 February 2026 06:01:43 +0000 (0:00:01.759) 0:24:55.556 ***** 2026-02-14 06:02:14.273559 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-14 06:02:14.273570 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-14 06:02:14.273580 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-14 06:02:14.273591 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:02:14.273608 | orchestrator | 2026-02-14 06:02:14.273635 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-14 06:02:14.273656 | orchestrator | Saturday 14 February 2026 06:01:44 +0000 (0:00:01.168) 0:24:56.724 ***** 2026-02-14 06:02:14.273673 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:02:14.273691 | orchestrator | 2026-02-14 06:02:14.273707 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-14 06:02:14.273724 | orchestrator | Saturday 14 February 2026 06:01:45 +0000 (0:00:01.271) 0:24:57.995 ***** 2026-02-14 06:02:14.273741 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:02:14.273757 | orchestrator | 2026-02-14 06:02:14.273775 | orchestrator | TASK [ceph-container-common : Copy ceph dev image 
file] ************************ 2026-02-14 06:02:14.273793 | orchestrator | Saturday 14 February 2026 06:01:46 +0000 (0:00:01.160) 0:24:59.156 ***** 2026-02-14 06:02:14.273811 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:02:14.273831 | orchestrator | 2026-02-14 06:02:14.273849 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-14 06:02:14.273868 | orchestrator | Saturday 14 February 2026 06:01:48 +0000 (0:00:01.247) 0:25:00.404 ***** 2026-02-14 06:02:14.273882 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:02:14.273893 | orchestrator | 2026-02-14 06:02:14.273961 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-14 06:02:14.273974 | orchestrator | Saturday 14 February 2026 06:01:49 +0000 (0:00:01.166) 0:25:01.570 ***** 2026-02-14 06:02:14.273985 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:02:14.273995 | orchestrator | 2026-02-14 06:02:14.274006 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-14 06:02:14.274073 | orchestrator | Saturday 14 February 2026 06:01:50 +0000 (0:00:01.152) 0:25:02.723 ***** 2026-02-14 06:02:14.274085 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:02:14.274095 | orchestrator | 2026-02-14 06:02:14.274106 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-14 06:02:14.274117 | orchestrator | Saturday 14 February 2026 06:01:53 +0000 (0:00:02.630) 0:25:05.353 ***** 2026-02-14 06:02:14.274127 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:02:14.274138 | orchestrator | 2026-02-14 06:02:14.274148 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-14 06:02:14.274159 | orchestrator | Saturday 14 February 2026 06:01:54 +0000 (0:00:01.142) 0:25:06.495 ***** 2026-02-14 06:02:14.274169 | orchestrator | included: 
/ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0 2026-02-14 06:02:14.274180 | orchestrator | 2026-02-14 06:02:14.274191 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-14 06:02:14.274201 | orchestrator | Saturday 14 February 2026 06:01:55 +0000 (0:00:01.133) 0:25:07.629 ***** 2026-02-14 06:02:14.274212 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:02:14.274222 | orchestrator | 2026-02-14 06:02:14.274233 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-14 06:02:14.274243 | orchestrator | Saturday 14 February 2026 06:01:56 +0000 (0:00:01.186) 0:25:08.816 ***** 2026-02-14 06:02:14.274254 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:02:14.274278 | orchestrator | 2026-02-14 06:02:14.274290 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-14 06:02:14.274300 | orchestrator | Saturday 14 February 2026 06:01:57 +0000 (0:00:01.189) 0:25:10.005 ***** 2026-02-14 06:02:14.274311 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:02:14.274321 | orchestrator | 2026-02-14 06:02:14.274332 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-14 06:02:14.274343 | orchestrator | Saturday 14 February 2026 06:01:58 +0000 (0:00:01.232) 0:25:11.237 ***** 2026-02-14 06:02:14.274353 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:02:14.274364 | orchestrator | 2026-02-14 06:02:14.274375 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-14 06:02:14.274385 | orchestrator | Saturday 14 February 2026 06:02:00 +0000 (0:00:01.149) 0:25:12.387 ***** 2026-02-14 06:02:14.274395 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:02:14.274406 | orchestrator | 2026-02-14 06:02:14.274417 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
octopus] ******************* 2026-02-14 06:02:14.274428 | orchestrator | Saturday 14 February 2026 06:02:01 +0000 (0:00:01.239) 0:25:13.626 ***** 2026-02-14 06:02:14.274438 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:02:14.274449 | orchestrator | 2026-02-14 06:02:14.274460 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-14 06:02:14.274470 | orchestrator | Saturday 14 February 2026 06:02:02 +0000 (0:00:01.140) 0:25:14.767 ***** 2026-02-14 06:02:14.274481 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:02:14.274491 | orchestrator | 2026-02-14 06:02:14.274502 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-14 06:02:14.274513 | orchestrator | Saturday 14 February 2026 06:02:03 +0000 (0:00:01.193) 0:25:15.961 ***** 2026-02-14 06:02:14.274523 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:02:14.274534 | orchestrator | 2026-02-14 06:02:14.274544 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-14 06:02:14.274555 | orchestrator | Saturday 14 February 2026 06:02:04 +0000 (0:00:01.154) 0:25:17.116 ***** 2026-02-14 06:02:14.274565 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:02:14.274576 | orchestrator | 2026-02-14 06:02:14.274594 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-14 06:02:14.274605 | orchestrator | Saturday 14 February 2026 06:02:06 +0000 (0:00:01.448) 0:25:18.564 ***** 2026-02-14 06:02:14.274616 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0 2026-02-14 06:02:14.274627 | orchestrator | 2026-02-14 06:02:14.274638 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-14 06:02:14.274649 | orchestrator | Saturday 14 February 2026 06:02:07 +0000 (0:00:01.191) 0:25:19.756 ***** 2026-02-14 
06:02:14.274659 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph) 2026-02-14 06:02:14.274670 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/) 2026-02-14 06:02:14.274681 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-02-14 06:02:14.274691 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-02-14 06:02:14.274720 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-02-14 06:02:14.274731 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-02-14 06:02:14.274742 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-02-14 06:02:14.274752 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-02-14 06:02:14.274763 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-14 06:02:14.274774 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-14 06:02:14.274785 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-14 06:02:14.274796 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-14 06:02:14.274807 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-14 06:02:14.274825 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-14 06:02:14.274836 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph) 2026-02-14 06:02:14.274846 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph) 2026-02-14 06:02:14.274857 | orchestrator | 2026-02-14 06:02:14.274875 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-14 06:03:11.388079 | orchestrator | Saturday 14 February 2026 06:02:14 +0000 (0:00:06.801) 0:25:26.558 ***** 2026-02-14 06:03:11.388226 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:03:11.388254 | orchestrator | 2026-02-14 06:03:11.388274 | orchestrator | TASK [ceph-config : 
Reset num_osds] ******************************************** 2026-02-14 06:03:11.388292 | orchestrator | Saturday 14 February 2026 06:02:15 +0000 (0:00:01.403) 0:25:27.961 ***** 2026-02-14 06:03:11.388309 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:03:11.388326 | orchestrator | 2026-02-14 06:03:11.388343 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-14 06:03:11.388361 | orchestrator | Saturday 14 February 2026 06:02:16 +0000 (0:00:01.154) 0:25:29.116 ***** 2026-02-14 06:03:11.388380 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:03:11.388397 | orchestrator | 2026-02-14 06:03:11.388414 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-14 06:03:11.388431 | orchestrator | Saturday 14 February 2026 06:02:18 +0000 (0:00:01.231) 0:25:30.348 ***** 2026-02-14 06:03:11.388447 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:03:11.388465 | orchestrator | 2026-02-14 06:03:11.388484 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-14 06:03:11.388503 | orchestrator | Saturday 14 February 2026 06:02:19 +0000 (0:00:01.164) 0:25:31.512 ***** 2026-02-14 06:03:11.388522 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:03:11.388541 | orchestrator | 2026-02-14 06:03:11.388558 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-14 06:03:11.388576 | orchestrator | Saturday 14 February 2026 06:02:20 +0000 (0:00:01.265) 0:25:32.778 ***** 2026-02-14 06:03:11.388595 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:03:11.388614 | orchestrator | 2026-02-14 06:03:11.388631 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-14 06:03:11.388650 | orchestrator | Saturday 14 February 2026 06:02:21 +0000 (0:00:01.158) 0:25:33.936 ***** 2026-02-14 
06:03:11.388668 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:03:11.388686 | orchestrator | 2026-02-14 06:03:11.388704 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-14 06:03:11.388724 | orchestrator | Saturday 14 February 2026 06:02:22 +0000 (0:00:01.167) 0:25:35.104 ***** 2026-02-14 06:03:11.388735 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:03:11.388751 | orchestrator | 2026-02-14 06:03:11.388770 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-14 06:03:11.388789 | orchestrator | Saturday 14 February 2026 06:02:23 +0000 (0:00:01.168) 0:25:36.273 ***** 2026-02-14 06:03:11.388806 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:03:11.388825 | orchestrator | 2026-02-14 06:03:11.388845 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-14 06:03:11.388863 | orchestrator | Saturday 14 February 2026 06:02:25 +0000 (0:00:01.175) 0:25:37.448 ***** 2026-02-14 06:03:11.388874 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:03:11.388885 | orchestrator | 2026-02-14 06:03:11.388923 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-14 06:03:11.388937 | orchestrator | Saturday 14 February 2026 06:02:26 +0000 (0:00:01.168) 0:25:38.617 ***** 2026-02-14 06:03:11.388948 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:03:11.388959 | orchestrator | 2026-02-14 06:03:11.388970 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-14 06:03:11.388981 | orchestrator | Saturday 14 February 2026 06:02:27 +0000 (0:00:01.323) 0:25:39.940 ***** 2026-02-14 06:03:11.389024 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:03:11.389043 | orchestrator | 2026-02-14 06:03:11.389061 | 
orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-14 06:03:11.389078 | orchestrator | Saturday 14 February 2026 06:02:28 +0000 (0:00:01.140) 0:25:41.081 ***** 2026-02-14 06:03:11.389097 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:03:11.389114 | orchestrator | 2026-02-14 06:03:11.389133 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-14 06:03:11.389152 | orchestrator | Saturday 14 February 2026 06:02:30 +0000 (0:00:01.266) 0:25:42.347 ***** 2026-02-14 06:03:11.389170 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:03:11.389189 | orchestrator | 2026-02-14 06:03:11.389208 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-14 06:03:11.389220 | orchestrator | Saturday 14 February 2026 06:02:31 +0000 (0:00:01.114) 0:25:43.462 ***** 2026-02-14 06:03:11.389231 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:03:11.389241 | orchestrator | 2026-02-14 06:03:11.389253 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-14 06:03:11.389264 | orchestrator | Saturday 14 February 2026 06:02:32 +0000 (0:00:01.226) 0:25:44.689 ***** 2026-02-14 06:03:11.389275 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:03:11.389285 | orchestrator | 2026-02-14 06:03:11.389296 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-14 06:03:11.389407 | orchestrator | Saturday 14 February 2026 06:02:33 +0000 (0:00:01.152) 0:25:45.841 ***** 2026-02-14 06:03:11.389429 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:03:11.389440 | orchestrator | 2026-02-14 06:03:11.389451 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-14 06:03:11.389464 | orchestrator | Saturday 14 
February 2026 06:02:34 +0000 (0:00:01.124) 0:25:46.965 ***** 2026-02-14 06:03:11.389475 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:03:11.389485 | orchestrator | 2026-02-14 06:03:11.389496 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-14 06:03:11.389507 | orchestrator | Saturday 14 February 2026 06:02:35 +0000 (0:00:01.290) 0:25:48.256 ***** 2026-02-14 06:03:11.389518 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:03:11.389529 | orchestrator | 2026-02-14 06:03:11.389540 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-14 06:03:11.389551 | orchestrator | Saturday 14 February 2026 06:02:37 +0000 (0:00:01.162) 0:25:49.418 ***** 2026-02-14 06:03:11.389562 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:03:11.389573 | orchestrator | 2026-02-14 06:03:11.389608 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-14 06:03:11.389620 | orchestrator | Saturday 14 February 2026 06:02:38 +0000 (0:00:01.185) 0:25:50.604 ***** 2026-02-14 06:03:11.389630 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:03:11.389642 | orchestrator | 2026-02-14 06:03:11.389652 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-14 06:03:11.389663 | orchestrator | Saturday 14 February 2026 06:02:39 +0000 (0:00:01.216) 0:25:51.820 ***** 2026-02-14 06:03:11.389674 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-14 06:03:11.389685 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-14 06:03:11.389696 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-14 06:03:11.389707 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:03:11.389718 | orchestrator | 2026-02-14 06:03:11.389729 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_interface - ipv4] ****** 2026-02-14 06:03:11.389739 | orchestrator | Saturday 14 February 2026 06:02:41 +0000 (0:00:01.900) 0:25:53.721 ***** 2026-02-14 06:03:11.389750 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-14 06:03:11.389761 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-14 06:03:11.389772 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-14 06:03:11.389795 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:03:11.389807 | orchestrator | 2026-02-14 06:03:11.389817 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-14 06:03:11.389828 | orchestrator | Saturday 14 February 2026 06:02:43 +0000 (0:00:01.927) 0:25:55.648 ***** 2026-02-14 06:03:11.389839 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-14 06:03:11.389850 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-14 06:03:11.389860 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-14 06:03:11.389871 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:03:11.389882 | orchestrator | 2026-02-14 06:03:11.389892 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-14 06:03:11.389930 | orchestrator | Saturday 14 February 2026 06:02:45 +0000 (0:00:02.107) 0:25:57.756 ***** 2026-02-14 06:03:11.389941 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:03:11.389952 | orchestrator | 2026-02-14 06:03:11.389963 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-14 06:03:11.389974 | orchestrator | Saturday 14 February 2026 06:02:46 +0000 (0:00:01.161) 0:25:58.917 ***** 2026-02-14 06:03:11.389986 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-02-14 06:03:11.389997 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:03:11.390008 | orchestrator 
| 2026-02-14 06:03:11.390102 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-14 06:03:11.390117 | orchestrator | Saturday 14 February 2026 06:02:47 +0000 (0:00:01.294) 0:26:00.212 ***** 2026-02-14 06:03:11.390128 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:03:11.390139 | orchestrator | 2026-02-14 06:03:11.390150 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-14 06:03:11.390161 | orchestrator | Saturday 14 February 2026 06:02:49 +0000 (0:00:01.858) 0:26:02.070 ***** 2026-02-14 06:03:11.390173 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-14 06:03:11.390194 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:03:11.390215 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 06:03:11.390235 | orchestrator | 2026-02-14 06:03:11.390254 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-14 06:03:11.390273 | orchestrator | Saturday 14 February 2026 06:02:51 +0000 (0:00:01.811) 0:26:03.882 ***** 2026-02-14 06:03:11.390301 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0 2026-02-14 06:03:11.390319 | orchestrator | 2026-02-14 06:03:11.390338 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-02-14 06:03:11.390356 | orchestrator | Saturday 14 February 2026 06:02:53 +0000 (0:00:01.674) 0:26:05.556 ***** 2026-02-14 06:03:11.390373 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:03:11.390391 | orchestrator | 2026-02-14 06:03:11.390410 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-02-14 06:03:11.390427 | orchestrator | Saturday 14 February 2026 06:02:54 +0000 (0:00:01.568) 0:26:07.125 ***** 2026-02-14 06:03:11.390445 | 
orchestrator | skipping: [testbed-node-0] 2026-02-14 06:03:11.390463 | orchestrator | 2026-02-14 06:03:11.390481 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-02-14 06:03:11.390501 | orchestrator | Saturday 14 February 2026 06:02:55 +0000 (0:00:01.135) 0:26:08.261 ***** 2026-02-14 06:03:11.390519 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-14 06:03:11.390538 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-14 06:03:11.390557 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-14 06:03:11.390577 | orchestrator | ok: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-02-14 06:03:11.390597 | orchestrator | 2026-02-14 06:03:11.390617 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-02-14 06:03:11.390636 | orchestrator | Saturday 14 February 2026 06:03:04 +0000 (0:00:08.118) 0:26:16.379 ***** 2026-02-14 06:03:11.390670 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:03:11.390691 | orchestrator | 2026-02-14 06:03:11.390711 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-02-14 06:03:11.390732 | orchestrator | Saturday 14 February 2026 06:03:05 +0000 (0:00:01.250) 0:26:17.630 ***** 2026-02-14 06:03:11.390752 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-14 06:03:11.390767 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-14 06:03:11.390777 | orchestrator | 2026-02-14 06:03:11.390788 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-02-14 06:03:11.390799 | orchestrator | Saturday 14 February 2026 06:03:09 +0000 (0:00:04.067) 0:26:21.698 ***** 2026-02-14 06:03:11.390826 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-14 06:03:59.976259 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-14 06:03:59.976375 | orchestrator 
| 2026-02-14 06:03:59.976391 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-02-14 06:03:59.976404 | orchestrator | Saturday 14 February 2026 06:03:11 +0000 (0:00:02.004) 0:26:23.703 ***** 2026-02-14 06:03:59.976416 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:03:59.976428 | orchestrator | 2026-02-14 06:03:59.976439 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-02-14 06:03:59.976450 | orchestrator | Saturday 14 February 2026 06:03:12 +0000 (0:00:01.542) 0:26:25.246 ***** 2026-02-14 06:03:59.976461 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:03:59.976472 | orchestrator | 2026-02-14 06:03:59.976483 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-14 06:03:59.976494 | orchestrator | Saturday 14 February 2026 06:03:14 +0000 (0:00:01.166) 0:26:26.412 ***** 2026-02-14 06:03:59.976504 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:03:59.976515 | orchestrator | 2026-02-14 06:03:59.976530 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-14 06:03:59.976541 | orchestrator | Saturday 14 February 2026 06:03:15 +0000 (0:00:01.384) 0:26:27.797 ***** 2026-02-14 06:03:59.976552 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0 2026-02-14 06:03:59.976564 | orchestrator | 2026-02-14 06:03:59.976574 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-02-14 06:03:59.976585 | orchestrator | Saturday 14 February 2026 06:03:17 +0000 (0:00:01.557) 0:26:29.355 ***** 2026-02-14 06:03:59.976596 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:03:59.976607 | orchestrator | 2026-02-14 06:03:59.976618 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-02-14 06:03:59.976628 | orchestrator | 
Saturday 14 February 2026 06:03:18 +0000 (0:00:01.188) 0:26:30.543 ***** 2026-02-14 06:03:59.976639 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:03:59.976650 | orchestrator | 2026-02-14 06:03:59.976661 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-02-14 06:03:59.976671 | orchestrator | Saturday 14 February 2026 06:03:19 +0000 (0:00:01.158) 0:26:31.702 ***** 2026-02-14 06:03:59.976682 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0 2026-02-14 06:03:59.976693 | orchestrator | 2026-02-14 06:03:59.976704 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-02-14 06:03:59.976714 | orchestrator | Saturday 14 February 2026 06:03:20 +0000 (0:00:01.508) 0:26:33.210 ***** 2026-02-14 06:03:59.976725 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:03:59.976736 | orchestrator | 2026-02-14 06:03:59.976747 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-02-14 06:03:59.976757 | orchestrator | Saturday 14 February 2026 06:03:23 +0000 (0:00:02.120) 0:26:35.330 ***** 2026-02-14 06:03:59.976768 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:03:59.976779 | orchestrator | 2026-02-14 06:03:59.976791 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-02-14 06:03:59.976804 | orchestrator | Saturday 14 February 2026 06:03:25 +0000 (0:00:02.012) 0:26:37.342 ***** 2026-02-14 06:03:59.976841 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:03:59.976855 | orchestrator | 2026-02-14 06:03:59.976868 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-02-14 06:03:59.976880 | orchestrator | Saturday 14 February 2026 06:03:27 +0000 (0:00:02.603) 0:26:39.946 ***** 2026-02-14 06:03:59.976919 | orchestrator | changed: [testbed-node-0] 2026-02-14 06:03:59.976932 | orchestrator | 
2026-02-14 06:03:59.976945 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-02-14 06:03:59.976958 | orchestrator | Saturday 14 February 2026 06:03:31 +0000 (0:00:03.938) 0:26:43.884 ***** 2026-02-14 06:03:59.976971 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:03:59.976983 | orchestrator | 2026-02-14 06:03:59.977009 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-02-14 06:03:59.977023 | orchestrator | 2026-02-14 06:03:59.977035 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-02-14 06:03:59.977048 | orchestrator | Saturday 14 February 2026 06:03:32 +0000 (0:00:01.444) 0:26:45.329 ***** 2026-02-14 06:03:59.977060 | orchestrator | changed: [testbed-node-1] 2026-02-14 06:03:59.977073 | orchestrator | 2026-02-14 06:03:59.977085 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-02-14 06:03:59.977097 | orchestrator | Saturday 14 February 2026 06:03:35 +0000 (0:00:02.613) 0:26:47.942 ***** 2026-02-14 06:03:59.977109 | orchestrator | changed: [testbed-node-1] 2026-02-14 06:03:59.977122 | orchestrator | 2026-02-14 06:03:59.977135 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-14 06:03:59.977147 | orchestrator | Saturday 14 February 2026 06:03:37 +0000 (0:00:02.201) 0:26:50.143 ***** 2026-02-14 06:03:59.977158 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1 2026-02-14 06:03:59.977168 | orchestrator | 2026-02-14 06:03:59.977179 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-14 06:03:59.977189 | orchestrator | Saturday 14 February 2026 06:03:39 +0000 (0:00:01.195) 0:26:51.339 ***** 2026-02-14 06:03:59.977200 | orchestrator | ok: [testbed-node-1] 2026-02-14 06:03:59.977211 | orchestrator | 
2026-02-14 06:03:59.977221 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-14 06:03:59.977232 | orchestrator | Saturday 14 February 2026 06:03:40 +0000 (0:00:01.548) 0:26:52.888 ***** 2026-02-14 06:03:59.977242 | orchestrator | ok: [testbed-node-1] 2026-02-14 06:03:59.977253 | orchestrator | 2026-02-14 06:03:59.977264 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-14 06:03:59.977275 | orchestrator | Saturday 14 February 2026 06:03:41 +0000 (0:00:01.138) 0:26:54.027 ***** 2026-02-14 06:03:59.977285 | orchestrator | ok: [testbed-node-1] 2026-02-14 06:03:59.977296 | orchestrator | 2026-02-14 06:03:59.977306 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-14 06:03:59.977317 | orchestrator | Saturday 14 February 2026 06:03:43 +0000 (0:00:01.528) 0:26:55.556 ***** 2026-02-14 06:03:59.977328 | orchestrator | ok: [testbed-node-1] 2026-02-14 06:03:59.977338 | orchestrator | 2026-02-14 06:03:59.977366 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-14 06:03:59.977378 | orchestrator | Saturday 14 February 2026 06:03:44 +0000 (0:00:01.183) 0:26:56.740 ***** 2026-02-14 06:03:59.977389 | orchestrator | ok: [testbed-node-1] 2026-02-14 06:03:59.977399 | orchestrator | 2026-02-14 06:03:59.977410 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-14 06:03:59.977420 | orchestrator | Saturday 14 February 2026 06:03:45 +0000 (0:00:01.369) 0:26:58.109 ***** 2026-02-14 06:03:59.977431 | orchestrator | ok: [testbed-node-1] 2026-02-14 06:03:59.977442 | orchestrator | 2026-02-14 06:03:59.977452 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-14 06:03:59.977463 | orchestrator | Saturday 14 February 2026 06:03:46 +0000 (0:00:01.162) 0:26:59.271 
***** 2026-02-14 06:03:59.977474 | orchestrator | skipping: [testbed-node-1] 2026-02-14 06:03:59.977485 | orchestrator | 2026-02-14 06:03:59.977496 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-14 06:03:59.977515 | orchestrator | Saturday 14 February 2026 06:03:48 +0000 (0:00:01.182) 0:27:00.454 ***** 2026-02-14 06:03:59.977525 | orchestrator | ok: [testbed-node-1] 2026-02-14 06:03:59.977536 | orchestrator | 2026-02-14 06:03:59.977547 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-14 06:03:59.977557 | orchestrator | Saturday 14 February 2026 06:03:49 +0000 (0:00:01.127) 0:27:01.581 ***** 2026-02-14 06:03:59.977568 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 06:03:59.977579 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-14 06:03:59.977589 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 06:03:59.977600 | orchestrator | 2026-02-14 06:03:59.977611 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-14 06:03:59.977621 | orchestrator | Saturday 14 February 2026 06:03:51 +0000 (0:00:01.761) 0:27:03.343 ***** 2026-02-14 06:03:59.977632 | orchestrator | ok: [testbed-node-1] 2026-02-14 06:03:59.977643 | orchestrator | 2026-02-14 06:03:59.977654 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-14 06:03:59.977664 | orchestrator | Saturday 14 February 2026 06:03:52 +0000 (0:00:01.301) 0:27:04.644 ***** 2026-02-14 06:03:59.977675 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 06:03:59.977685 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-14 06:03:59.977696 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] 
=> (item=testbed-node-2) 2026-02-14 06:03:59.977707 | orchestrator | 2026-02-14 06:03:59.977717 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-14 06:03:59.977728 | orchestrator | Saturday 14 February 2026 06:03:55 +0000 (0:00:02.903) 0:27:07.547 ***** 2026-02-14 06:03:59.977739 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-14 06:03:59.977750 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-14 06:03:59.977761 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-14 06:03:59.977772 | orchestrator | skipping: [testbed-node-1] 2026-02-14 06:03:59.977782 | orchestrator | 2026-02-14 06:03:59.977793 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-14 06:03:59.977804 | orchestrator | Saturday 14 February 2026 06:03:56 +0000 (0:00:01.468) 0:27:09.016 ***** 2026-02-14 06:03:59.977816 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-14 06:03:59.977836 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-14 06:03:59.977847 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-14 06:03:59.977858 | orchestrator | skipping: [testbed-node-1] 2026-02-14 06:03:59.977869 | orchestrator | 2026-02-14 06:03:59.977880 | orchestrator | TASK [ceph-facts : Set_fact running_mon - 
non_container] *********************** 2026-02-14 06:03:59.977944 | orchestrator | Saturday 14 February 2026 06:03:58 +0000 (0:00:02.072) 0:27:11.089 ***** 2026-02-14 06:03:59.977969 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 06:03:59.977994 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 06:03:59.978015 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 06:04:20.347206 | orchestrator | skipping: [testbed-node-1] 2026-02-14 06:04:20.347323 | orchestrator | 2026-02-14 06:04:20.347339 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-14 06:04:20.347352 | orchestrator | Saturday 14 February 2026 06:03:59 +0000 (0:00:01.198) 0:27:12.288 ***** 2026-02-14 06:04:20.347365 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'fcade5e8eca4', 
'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-14 06:03:52.840356', 'end': '2026-02-14 06:03:52.888631', 'delta': '0:00:00.048275', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fcade5e8eca4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-14 06:04:20.347379 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'b8937503c016', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-14 06:03:53.444572', 'end': '2026-02-14 06:03:53.493630', 'delta': '0:00:00.049058', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b8937503c016'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-14 06:04:20.347405 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'bc1e9cbf1ddd', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-14 06:03:54.001715', 'end': '2026-02-14 06:03:54.044203', 'delta': '0:00:00.042488', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 
'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['bc1e9cbf1ddd'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-14 06:04:20.347415 | orchestrator | 2026-02-14 06:04:20.347426 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-14 06:04:20.347436 | orchestrator | Saturday 14 February 2026 06:04:01 +0000 (0:00:01.252) 0:27:13.541 ***** 2026-02-14 06:04:20.347446 | orchestrator | ok: [testbed-node-1] 2026-02-14 06:04:20.347457 | orchestrator | 2026-02-14 06:04:20.347466 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-14 06:04:20.347476 | orchestrator | Saturday 14 February 2026 06:04:02 +0000 (0:00:01.301) 0:27:14.842 ***** 2026-02-14 06:04:20.347507 | orchestrator | skipping: [testbed-node-1] 2026-02-14 06:04:20.347518 | orchestrator | 2026-02-14 06:04:20.347528 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-14 06:04:20.347537 | orchestrator | Saturday 14 February 2026 06:04:03 +0000 (0:00:01.266) 0:27:16.109 ***** 2026-02-14 06:04:20.347547 | orchestrator | ok: [testbed-node-1] 2026-02-14 06:04:20.347556 | orchestrator | 2026-02-14 06:04:20.347566 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-14 06:04:20.347575 | orchestrator | Saturday 14 February 2026 06:04:05 +0000 (0:00:01.350) 0:27:17.459 ***** 2026-02-14 06:04:20.347585 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-02-14 06:04:20.347594 | orchestrator | 2026-02-14 06:04:20.347604 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-14 06:04:20.347614 | orchestrator | Saturday 14 February 2026 06:04:07 +0000 (0:00:02.057) 0:27:19.517 ***** 2026-02-14 
06:04:20.347623 | orchestrator | ok: [testbed-node-1] 2026-02-14 06:04:20.347633 | orchestrator | 2026-02-14 06:04:20.347642 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-14 06:04:20.347652 | orchestrator | Saturday 14 February 2026 06:04:08 +0000 (0:00:01.160) 0:27:20.677 ***** 2026-02-14 06:04:20.347661 | orchestrator | skipping: [testbed-node-1] 2026-02-14 06:04:20.347670 | orchestrator | 2026-02-14 06:04:20.347680 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-14 06:04:20.347689 | orchestrator | Saturday 14 February 2026 06:04:09 +0000 (0:00:01.282) 0:27:21.959 ***** 2026-02-14 06:04:20.347699 | orchestrator | skipping: [testbed-node-1] 2026-02-14 06:04:20.347708 | orchestrator | 2026-02-14 06:04:20.347718 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-14 06:04:20.347727 | orchestrator | Saturday 14 February 2026 06:04:10 +0000 (0:00:01.310) 0:27:23.270 ***** 2026-02-14 06:04:20.347737 | orchestrator | skipping: [testbed-node-1] 2026-02-14 06:04:20.347748 | orchestrator | 2026-02-14 06:04:20.347777 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-14 06:04:20.347789 | orchestrator | Saturday 14 February 2026 06:04:12 +0000 (0:00:01.129) 0:27:24.399 ***** 2026-02-14 06:04:20.347818 | orchestrator | skipping: [testbed-node-1] 2026-02-14 06:04:20.347839 | orchestrator | 2026-02-14 06:04:20.347851 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-14 06:04:20.347862 | orchestrator | Saturday 14 February 2026 06:04:13 +0000 (0:00:01.152) 0:27:25.552 ***** 2026-02-14 06:04:20.347873 | orchestrator | skipping: [testbed-node-1] 2026-02-14 06:04:20.347885 | orchestrator | 2026-02-14 06:04:20.347917 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] 
*************************** 2026-02-14 06:04:20.347927 | orchestrator | Saturday 14 February 2026 06:04:14 +0000 (0:00:01.135) 0:27:26.688 ***** 2026-02-14 06:04:20.347937 | orchestrator | skipping: [testbed-node-1] 2026-02-14 06:04:20.347946 | orchestrator | 2026-02-14 06:04:20.347956 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-14 06:04:20.347965 | orchestrator | Saturday 14 February 2026 06:04:15 +0000 (0:00:01.182) 0:27:27.871 ***** 2026-02-14 06:04:20.347974 | orchestrator | skipping: [testbed-node-1] 2026-02-14 06:04:20.347984 | orchestrator | 2026-02-14 06:04:20.347994 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-14 06:04:20.348003 | orchestrator | Saturday 14 February 2026 06:04:16 +0000 (0:00:01.143) 0:27:29.014 ***** 2026-02-14 06:04:20.348013 | orchestrator | skipping: [testbed-node-1] 2026-02-14 06:04:20.348022 | orchestrator | 2026-02-14 06:04:20.348032 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-14 06:04:20.348042 | orchestrator | Saturday 14 February 2026 06:04:17 +0000 (0:00:01.134) 0:27:30.149 ***** 2026-02-14 06:04:20.348051 | orchestrator | skipping: [testbed-node-1] 2026-02-14 06:04:20.348061 | orchestrator | 2026-02-14 06:04:20.348070 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-14 06:04:20.348087 | orchestrator | Saturday 14 February 2026 06:04:19 +0000 (0:00:01.184) 0:27:31.333 ***** 2026-02-14 06:04:20.348098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 
Bytes', 'host': '', 'holders': []}})  2026-02-14 06:04:20.348116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:04:20.348127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:04:20.348138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-09-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-14 06:04:20.348150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': 
'', 'holders': []}})  2026-02-14 06:04:20.348160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:04:20.348177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:04:21.786296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9', 'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '582964e9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part16', 'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part14', 
'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part15', 'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part1', 'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-14 06:04:21.786425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:04:21.786443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:04:21.786456 | orchestrator | skipping: [testbed-node-1] 2026-02-14 06:04:21.786469 | orchestrator | 2026-02-14 06:04:21.786481 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-14 06:04:21.786494 | orchestrator | Saturday 14 February 2026 06:04:20 +0000 (0:00:01.323) 0:27:32.657 ***** 2026-02-14 06:04:21.786508 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:04:21.786540 | 
orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:04:21.786553 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:04:21.786575 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-09-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:04:21.786594 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:04:21.786610 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:04:21.786631 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:04:21.786680 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9', 'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '582964e9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part16', 'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part14', 'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part15', 'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part1', 'scsi-SQEMU_QEMU_HARDDISK_582964e9-d5ca-49cb-a5b2-57a438ee9ec9-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:04:57.791520 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:04:57.791642 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:04:57.791659 | orchestrator | skipping: [testbed-node-1] 2026-02-14 06:04:57.791674 | orchestrator | 2026-02-14 06:04:57.791686 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-14 06:04:57.791699 | 
orchestrator | Saturday 14 February 2026 06:04:21 +0000 (0:00:01.444) 0:27:34.101 ***** 2026-02-14 06:04:57.791711 | orchestrator | ok: [testbed-node-1] 2026-02-14 06:04:57.791723 | orchestrator | 2026-02-14 06:04:57.791735 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-14 06:04:57.791745 | orchestrator | Saturday 14 February 2026 06:04:23 +0000 (0:00:01.561) 0:27:35.663 ***** 2026-02-14 06:04:57.791756 | orchestrator | ok: [testbed-node-1] 2026-02-14 06:04:57.791767 | orchestrator | 2026-02-14 06:04:57.791778 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-14 06:04:57.791789 | orchestrator | Saturday 14 February 2026 06:04:24 +0000 (0:00:01.179) 0:27:36.843 ***** 2026-02-14 06:04:57.791800 | orchestrator | ok: [testbed-node-1] 2026-02-14 06:04:57.791811 | orchestrator | 2026-02-14 06:04:57.791821 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-14 06:04:57.791832 | orchestrator | Saturday 14 February 2026 06:04:26 +0000 (0:00:01.621) 0:27:38.465 ***** 2026-02-14 06:04:57.791869 | orchestrator | skipping: [testbed-node-1] 2026-02-14 06:04:57.791882 | orchestrator | 2026-02-14 06:04:57.791921 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-14 06:04:57.791932 | orchestrator | Saturday 14 February 2026 06:04:27 +0000 (0:00:01.109) 0:27:39.575 ***** 2026-02-14 06:04:57.791943 | orchestrator | skipping: [testbed-node-1] 2026-02-14 06:04:57.791954 | orchestrator | 2026-02-14 06:04:57.791965 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-14 06:04:57.791976 | orchestrator | Saturday 14 February 2026 06:04:28 +0000 (0:00:01.280) 0:27:40.856 ***** 2026-02-14 06:04:57.791987 | orchestrator | skipping: [testbed-node-1] 2026-02-14 06:04:57.791997 | orchestrator | 2026-02-14 06:04:57.792008 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-14 06:04:57.792020 | orchestrator | Saturday 14 February 2026 06:04:29 +0000 (0:00:01.126) 0:27:41.983 ***** 2026-02-14 06:04:57.792032 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-02-14 06:04:57.792045 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-14 06:04:57.792058 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-02-14 06:04:57.792071 | orchestrator | 2026-02-14 06:04:57.792083 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-14 06:04:57.792095 | orchestrator | Saturday 14 February 2026 06:04:31 +0000 (0:00:01.774) 0:27:43.757 ***** 2026-02-14 06:04:57.792108 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-14 06:04:57.792121 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-14 06:04:57.792134 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-14 06:04:57.792147 | orchestrator | skipping: [testbed-node-1] 2026-02-14 06:04:57.792160 | orchestrator | 2026-02-14 06:04:57.792172 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-14 06:04:57.792185 | orchestrator | Saturday 14 February 2026 06:04:32 +0000 (0:00:01.194) 0:27:44.952 ***** 2026-02-14 06:04:57.792197 | orchestrator | skipping: [testbed-node-1] 2026-02-14 06:04:57.792209 | orchestrator | 2026-02-14 06:04:57.792221 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-14 06:04:57.792234 | orchestrator | Saturday 14 February 2026 06:04:33 +0000 (0:00:01.139) 0:27:46.092 ***** 2026-02-14 06:04:57.792247 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 06:04:57.792260 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-14 
06:04:57.792273 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-14 06:04:57.792286 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-14 06:04:57.792298 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-14 06:04:57.792311 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-14 06:04:57.792353 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-14 06:04:57.792368 | orchestrator |
2026-02-14 06:04:57.792381 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-14 06:04:57.792392 | orchestrator | Saturday 14 February 2026 06:04:36 +0000 (0:00:02.349) 0:27:48.441 *****
2026-02-14 06:04:57.792403 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-14 06:04:57.792414 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-14 06:04:57.792424 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-14 06:04:57.792435 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-14 06:04:57.792446 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-14 06:04:57.792457 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-14 06:04:57.792475 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-14 06:04:57.792487 | orchestrator |
2026-02-14 06:04:57.792497 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-14 06:04:57.792508 | orchestrator | Saturday 14 February 2026 06:04:38 +0000 (0:00:02.455) 0:27:50.897 *****
2026-02-14 06:04:57.792519 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1
2026-02-14 06:04:57.792530 | orchestrator |
2026-02-14 06:04:57.792541 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-14 06:04:57.792551 | orchestrator | Saturday 14 February 2026 06:04:39 +0000 (0:00:01.424) 0:27:52.322 *****
2026-02-14 06:04:57.792562 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1
2026-02-14 06:04:57.792573 | orchestrator |
2026-02-14 06:04:57.792584 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-14 06:04:57.792595 | orchestrator | Saturday 14 February 2026 06:04:41 +0000 (0:00:01.176) 0:27:53.499 *****
2026-02-14 06:04:57.792605 | orchestrator | ok: [testbed-node-1]
2026-02-14 06:04:57.792616 | orchestrator |
2026-02-14 06:04:57.792627 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-14 06:04:57.792638 | orchestrator | Saturday 14 February 2026 06:04:42 +0000 (0:00:01.624) 0:27:55.124 *****
2026-02-14 06:04:57.792649 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:04:57.792660 | orchestrator |
2026-02-14 06:04:57.792671 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-14 06:04:57.792681 | orchestrator | Saturday 14 February 2026 06:04:43 +0000 (0:00:01.138) 0:27:56.262 *****
2026-02-14 06:04:57.792692 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:04:57.792703 | orchestrator |
2026-02-14 06:04:57.792714 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-14 06:04:57.792724 | orchestrator | Saturday 14 February 2026 06:04:45 +0000 (0:00:01.144) 0:27:57.407 *****
2026-02-14 06:04:57.792735 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:04:57.792746 | orchestrator |
2026-02-14 06:04:57.792757 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-14 06:04:57.792768 | orchestrator | Saturday 14 February 2026 06:04:46 +0000 (0:00:01.287) 0:27:58.694 *****
2026-02-14 06:04:57.792779 | orchestrator | ok: [testbed-node-1]
2026-02-14 06:04:57.792789 | orchestrator |
2026-02-14 06:04:57.792800 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-14 06:04:57.792811 | orchestrator | Saturday 14 February 2026 06:04:47 +0000 (0:00:01.593) 0:28:00.287 *****
2026-02-14 06:04:57.792822 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:04:57.792833 | orchestrator |
2026-02-14 06:04:57.792844 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-14 06:04:57.792855 | orchestrator | Saturday 14 February 2026 06:04:49 +0000 (0:00:01.138) 0:28:01.426 *****
2026-02-14 06:04:57.792865 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:04:57.792876 | orchestrator |
2026-02-14 06:04:57.792917 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-14 06:04:57.792929 | orchestrator | Saturday 14 February 2026 06:04:50 +0000 (0:00:01.187) 0:28:02.614 *****
2026-02-14 06:04:57.792940 | orchestrator | ok: [testbed-node-1]
2026-02-14 06:04:57.792951 | orchestrator |
2026-02-14 06:04:57.792961 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-14 06:04:57.792972 | orchestrator | Saturday 14 February 2026 06:04:51 +0000 (0:00:01.590) 0:28:04.205 *****
2026-02-14 06:04:57.792983 | orchestrator | ok: [testbed-node-1]
2026-02-14 06:04:57.792993 | orchestrator |
2026-02-14 06:04:57.793004 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-14 06:04:57.793015 | orchestrator | Saturday 14 February 2026 06:04:53 +0000 (0:00:01.549) 0:28:05.754 *****
2026-02-14 06:04:57.793026 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:04:57.793036 | orchestrator |
2026-02-14 06:04:57.793057 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-14 06:04:57.793068 | orchestrator | Saturday 14 February 2026 06:04:54 +0000 (0:00:00.989) 0:28:06.744 *****
2026-02-14 06:04:57.793079 | orchestrator | ok: [testbed-node-1]
2026-02-14 06:04:57.793090 | orchestrator |
2026-02-14 06:04:57.793100 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-14 06:04:57.793111 | orchestrator | Saturday 14 February 2026 06:04:55 +0000 (0:00:00.829) 0:28:07.573 *****
2026-02-14 06:04:57.793122 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:04:57.793140 | orchestrator |
2026-02-14 06:04:57.793159 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-14 06:04:57.793177 | orchestrator | Saturday 14 February 2026 06:04:56 +0000 (0:00:00.883) 0:28:08.457 *****
2026-02-14 06:04:57.793195 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:04:57.793214 | orchestrator |
2026-02-14 06:04:57.793232 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-14 06:04:57.793258 | orchestrator | Saturday 14 February 2026 06:04:56 +0000 (0:00:00.813) 0:28:09.271 *****
2026-02-14 06:04:57.793279 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:05:38.978368 | orchestrator |
2026-02-14 06:05:38.978492 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-14 06:05:38.978510 | orchestrator | Saturday 14 February 2026 06:04:57 +0000 (0:00:00.833) 0:28:10.104 *****
2026-02-14 06:05:38.978522 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:05:38.978535 | orchestrator |
2026-02-14 06:05:38.978546 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-14 06:05:38.978557 | orchestrator | Saturday 14 February 2026 06:04:58 +0000 (0:00:00.820) 0:28:10.925 *****
2026-02-14 06:05:38.978568 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:05:38.978579 | orchestrator |
2026-02-14 06:05:38.978590 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-14 06:05:38.978601 | orchestrator | Saturday 14 February 2026 06:04:59 +0000 (0:00:00.805) 0:28:11.730 *****
2026-02-14 06:05:38.978612 | orchestrator | ok: [testbed-node-1]
2026-02-14 06:05:38.978624 | orchestrator |
2026-02-14 06:05:38.978635 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-14 06:05:38.978646 | orchestrator | Saturday 14 February 2026 06:05:00 +0000 (0:00:00.852) 0:28:12.583 *****
2026-02-14 06:05:38.978657 | orchestrator | ok: [testbed-node-1]
2026-02-14 06:05:38.978668 | orchestrator |
2026-02-14 06:05:38.978679 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-14 06:05:38.978689 | orchestrator | Saturday 14 February 2026 06:05:01 +0000 (0:00:00.833) 0:28:13.417 *****
2026-02-14 06:05:38.978700 | orchestrator | ok: [testbed-node-1]
2026-02-14 06:05:38.978711 | orchestrator |
2026-02-14 06:05:38.978722 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-14 06:05:38.978732 | orchestrator | Saturday 14 February 2026 06:05:01 +0000 (0:00:00.818) 0:28:14.235 *****
2026-02-14 06:05:38.978743 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:05:38.978754 | orchestrator |
2026-02-14 06:05:38.978765 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-14 06:05:38.978776 | orchestrator | Saturday 14 February 2026 06:05:02 +0000 (0:00:00.780) 0:28:15.015 *****
2026-02-14 06:05:38.978787 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:05:38.978798 | orchestrator |
2026-02-14 06:05:38.978809 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-14 06:05:38.978820 | orchestrator | Saturday 14 February 2026 06:05:03 +0000 (0:00:00.804) 0:28:15.820 *****
2026-02-14 06:05:38.978830 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:05:38.978841 | orchestrator |
2026-02-14 06:05:38.978852 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-14 06:05:38.978862 | orchestrator | Saturday 14 February 2026 06:05:04 +0000 (0:00:00.773) 0:28:16.593 *****
2026-02-14 06:05:38.978873 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:05:38.978913 | orchestrator |
2026-02-14 06:05:38.978955 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-14 06:05:38.978969 | orchestrator | Saturday 14 February 2026 06:05:05 +0000 (0:00:00.909) 0:28:17.502 *****
2026-02-14 06:05:38.978982 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:05:38.978994 | orchestrator |
2026-02-14 06:05:38.979007 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-14 06:05:38.979020 | orchestrator | Saturday 14 February 2026 06:05:05 +0000 (0:00:00.790) 0:28:18.292 *****
2026-02-14 06:05:38.979033 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:05:38.979046 | orchestrator |
2026-02-14 06:05:38.979058 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-14 06:05:38.979071 | orchestrator | Saturday 14 February 2026 06:05:06 +0000 (0:00:00.757) 0:28:19.050 *****
2026-02-14 06:05:38.979084 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:05:38.979097 | orchestrator |
2026-02-14 06:05:38.979109 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-14 06:05:38.979123 | orchestrator | Saturday 14 February 2026 06:05:07 +0000 (0:00:00.744) 0:28:19.795 *****
2026-02-14 06:05:38.979135 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:05:38.979149 | orchestrator |
2026-02-14 06:05:38.979161 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-14 06:05:38.979174 | orchestrator | Saturday 14 February 2026 06:05:08 +0000 (0:00:00.778) 0:28:20.573 *****
2026-02-14 06:05:38.979187 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:05:38.979199 | orchestrator |
2026-02-14 06:05:38.979211 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-14 06:05:38.979224 | orchestrator | Saturday 14 February 2026 06:05:08 +0000 (0:00:00.754) 0:28:21.328 *****
2026-02-14 06:05:38.979236 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:05:38.979248 | orchestrator |
2026-02-14 06:05:38.979261 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-14 06:05:38.979273 | orchestrator | Saturday 14 February 2026 06:05:09 +0000 (0:00:00.767) 0:28:22.095 *****
2026-02-14 06:05:38.979284 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:05:38.979295 | orchestrator |
2026-02-14 06:05:38.979306 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-14 06:05:38.979316 | orchestrator | Saturday 14 February 2026 06:05:10 +0000 (0:00:00.780) 0:28:22.876 *****
2026-02-14 06:05:38.979327 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:05:38.979338 | orchestrator |
2026-02-14 06:05:38.979349 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-14 06:05:38.979359 | orchestrator | Saturday 14 February 2026 06:05:11 +0000 (0:00:00.856) 0:28:23.733 *****
2026-02-14 06:05:38.979370 | orchestrator | ok: [testbed-node-1]
2026-02-14 06:05:38.979381 | orchestrator |
2026-02-14 06:05:38.979392 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-14 06:05:38.979402 | orchestrator | Saturday 14 February 2026 06:05:13 +0000 (0:00:01.651) 0:28:25.384 *****
2026-02-14 06:05:38.979413 | orchestrator | ok: [testbed-node-1]
2026-02-14 06:05:38.979424 | orchestrator |
2026-02-14 06:05:38.979434 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-14 06:05:38.979446 | orchestrator | Saturday 14 February 2026 06:05:15 +0000 (0:00:02.173) 0:28:27.558 *****
2026-02-14 06:05:38.979471 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1
2026-02-14 06:05:38.979484 | orchestrator |
2026-02-14 06:05:38.979514 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-14 06:05:38.979526 | orchestrator | Saturday 14 February 2026 06:05:16 +0000 (0:00:01.278) 0:28:28.836 *****
2026-02-14 06:05:38.979537 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:05:38.979547 | orchestrator |
2026-02-14 06:05:38.979558 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-14 06:05:38.979569 | orchestrator | Saturday 14 February 2026 06:05:17 +0000 (0:00:01.159) 0:28:29.995 *****
2026-02-14 06:05:38.979593 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:05:38.979603 | orchestrator |
2026-02-14 06:05:38.979614 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-14 06:05:38.979625 | orchestrator | Saturday 14 February 2026 06:05:18 +0000 (0:00:01.171) 0:28:31.167 *****
2026-02-14 06:05:38.979635 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-14 06:05:38.979646 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-14 06:05:38.979657 | orchestrator |
2026-02-14 06:05:38.979667 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-14 06:05:38.979678 | orchestrator | Saturday 14 February 2026 06:05:20 +0000 (0:00:01.889) 0:28:33.056 *****
2026-02-14 06:05:38.979689 | orchestrator | ok: [testbed-node-1]
2026-02-14 06:05:38.979699 | orchestrator |
2026-02-14 06:05:38.979710 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-14 06:05:38.979721 | orchestrator | Saturday 14 February 2026 06:05:22 +0000 (0:00:01.536) 0:28:34.593 *****
2026-02-14 06:05:38.979731 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:05:38.979742 | orchestrator |
2026-02-14 06:05:38.979752 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-14 06:05:38.979763 | orchestrator | Saturday 14 February 2026 06:05:23 +0000 (0:00:01.144) 0:28:35.737 *****
2026-02-14 06:05:38.979774 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:05:38.979784 | orchestrator |
2026-02-14 06:05:38.979795 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-14 06:05:38.979806 | orchestrator | Saturday 14 February 2026 06:05:24 +0000 (0:00:00.821) 0:28:36.558 *****
2026-02-14 06:05:38.979816 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:05:38.979827 | orchestrator |
2026-02-14 06:05:38.979838 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-14 06:05:38.979848 | orchestrator | Saturday 14 February 2026 06:05:25 +0000 (0:00:00.805) 0:28:37.364 *****
2026-02-14 06:05:38.979859 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1
2026-02-14 06:05:38.979870 | orchestrator |
2026-02-14 06:05:38.979897 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-14 06:05:38.979909 | orchestrator | Saturday 14 February 2026 06:05:26 +0000 (0:00:01.206) 0:28:38.570 *****
2026-02-14 06:05:38.979919 | orchestrator | ok: [testbed-node-1]
2026-02-14 06:05:38.979930 | orchestrator |
2026-02-14 06:05:38.979941 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-14 06:05:38.979952 | orchestrator | Saturday 14 February 2026 06:05:27 +0000 (0:00:01.723) 0:28:40.294 *****
2026-02-14 06:05:38.979963 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2) 
2026-02-14 06:05:38.979973 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2) 
2026-02-14 06:05:38.979984 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4) 
2026-02-14 06:05:38.979994 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:05:38.980005 | orchestrator |
2026-02-14 06:05:38.980016 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-14 06:05:38.980027 | orchestrator | Saturday 14 February 2026 06:05:29 +0000 (0:00:01.172) 0:28:41.466 *****
2026-02-14 06:05:38.980037 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:05:38.980048 | orchestrator |
2026-02-14 06:05:38.980059 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-14 06:05:38.980069 | orchestrator | Saturday 14 February 2026 06:05:30 +0000 (0:00:01.167) 0:28:42.634 *****
2026-02-14 06:05:38.980080 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:05:38.980091 | orchestrator |
2026-02-14 06:05:38.980102 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-14 06:05:38.980113 | orchestrator | Saturday 14 February 2026 06:05:31 +0000 (0:00:01.279) 0:28:43.914 *****
2026-02-14 06:05:38.980123 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:05:38.980142 | orchestrator |
2026-02-14 06:05:38.980153 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-14 06:05:38.980163 | orchestrator | Saturday 14 February 2026 06:05:32 +0000 (0:00:01.193) 0:28:45.108 *****
2026-02-14 06:05:38.980174 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:05:38.980185 | orchestrator |
2026-02-14 06:05:38.980196 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-14 06:05:38.980206 | orchestrator | Saturday 14 February 2026 06:05:33 +0000 (0:00:01.154) 0:28:46.262 *****
2026-02-14 06:05:38.980217 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:05:38.980228 | orchestrator |
2026-02-14 06:05:38.980238 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-14 06:05:38.980249 | orchestrator | Saturday 14 February 2026 06:05:34 +0000 (0:00:00.816) 0:28:47.078 *****
2026-02-14 06:05:38.980260 | orchestrator | ok: [testbed-node-1]
2026-02-14 06:05:38.980270 | orchestrator |
2026-02-14 06:05:38.980281 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-14 06:05:38.980292 | orchestrator | Saturday 14 February 2026 06:05:36 +0000 (0:00:02.231) 0:28:49.309 *****
2026-02-14 06:05:38.980302 | orchestrator | ok: [testbed-node-1]
2026-02-14 06:05:38.980313 | orchestrator |
2026-02-14 06:05:38.980324 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-14 06:05:38.980334 | orchestrator | Saturday 14 February 2026 06:05:37 +0000 (0:00:00.822) 0:28:50.132 *****
2026-02-14 06:05:38.980350 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1
2026-02-14 06:05:38.980362 | orchestrator |
2026-02-14 06:05:38.980379 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-14 06:06:17.182571 | orchestrator | Saturday 14 February 2026 06:05:38 +0000 (0:00:01.162) 0:28:51.294 *****
2026-02-14 06:06:17.182691 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:06:17.182738 | orchestrator |
2026-02-14 06:06:17.182752 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-14 06:06:17.182764 | orchestrator | Saturday 14 February 2026 06:05:40 +0000 (0:00:01.235) 0:28:52.530 *****
2026-02-14 06:06:17.182776 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:06:17.182787 | orchestrator |
2026-02-14 06:06:17.182798 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-14 06:06:17.182809 | orchestrator | Saturday 14 February 2026 06:05:41 +0000 (0:00:01.189) 0:28:53.720 *****
2026-02-14 06:06:17.182820 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:06:17.182830 | orchestrator |
2026-02-14 06:06:17.182841 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-14 06:06:17.182852 | orchestrator | Saturday 14 February 2026 06:05:42 +0000 (0:00:01.195) 0:28:54.915 *****
2026-02-14 06:06:17.182863 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:06:17.182874 | orchestrator |
2026-02-14 06:06:17.182924 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-14 06:06:17.182936 | orchestrator | Saturday 14 February 2026 06:05:43 +0000 (0:00:01.196) 0:28:56.112 *****
2026-02-14 06:06:17.182947 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:06:17.182957 | orchestrator |
2026-02-14 06:06:17.182968 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-14 06:06:17.182979 | orchestrator | Saturday 14 February 2026 06:05:44 +0000 (0:00:01.173) 0:28:57.285 *****
2026-02-14 06:06:17.182989 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:06:17.183000 | orchestrator |
2026-02-14 06:06:17.183010 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-14 06:06:17.183021 | orchestrator | Saturday 14 February 2026 06:05:46 +0000 (0:00:01.249) 0:28:58.534 *****
2026-02-14 06:06:17.183032 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:06:17.183043 | orchestrator |
2026-02-14 06:06:17.183053 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-14 06:06:17.183064 | orchestrator | Saturday 14 February 2026 06:05:47 +0000 (0:00:01.169) 0:28:59.704 *****
2026-02-14 06:06:17.183101 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:06:17.183115 | orchestrator |
2026-02-14 06:06:17.183128 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-14 06:06:17.183141 | orchestrator | Saturday 14 February 2026 06:05:49 +0000 (0:00:01.642) 0:29:01.346 *****
2026-02-14 06:06:17.183153 | orchestrator | ok: [testbed-node-1]
2026-02-14 06:06:17.183166 | orchestrator |
2026-02-14 06:06:17.183179 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-14 06:06:17.183191 | orchestrator | Saturday 14 February 2026 06:05:49 +0000 (0:00:00.848) 0:29:02.195 *****
2026-02-14 06:06:17.183204 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1
2026-02-14 06:06:17.183217 | orchestrator |
2026-02-14 06:06:17.183230 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-14 06:06:17.183243 | orchestrator | Saturday 14 February 2026 06:05:51 +0000 (0:00:01.156) 0:29:03.352 *****
2026-02-14 06:06:17.183256 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph)
2026-02-14 06:06:17.183268 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/)
2026-02-14 06:06:17.183280 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-02-14 06:06:17.183293 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-02-14 06:06:17.183305 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-02-14 06:06:17.183318 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-02-14 06:06:17.183330 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-02-14 06:06:17.183343 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-02-14 06:06:17.183356 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-14 06:06:17.183369 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-14 06:06:17.183382 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-14 06:06:17.183395 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-14 06:06:17.183407 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-14 06:06:17.183420 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-14 06:06:17.183432 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph)
2026-02-14 06:06:17.183445 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph)
2026-02-14 06:06:17.183457 | orchestrator |
2026-02-14 06:06:17.183468 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-14 06:06:17.183479 | orchestrator | Saturday 14 February 2026 06:05:57 +0000 (0:00:06.593) 0:29:09.946 *****
2026-02-14 06:06:17.183489 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:06:17.183500 | orchestrator |
2026-02-14 06:06:17.183511 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-14 06:06:17.183521 | orchestrator | Saturday 14 February 2026 06:05:58 +0000 (0:00:00.796) 0:29:10.743 *****
2026-02-14 06:06:17.183532 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:06:17.183543 | orchestrator |
2026-02-14 06:06:17.183553 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-14 06:06:17.183564 | orchestrator | Saturday 14 February 2026 06:05:59 +0000 (0:00:00.828) 0:29:11.571 *****
2026-02-14 06:06:17.183574 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:06:17.183585 | orchestrator |
2026-02-14 06:06:17.183596 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-14 06:06:17.183606 | orchestrator | Saturday 14 February 2026 06:06:00 +0000 (0:00:00.851) 0:29:12.423 *****
2026-02-14 06:06:17.183617 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:06:17.183642 | orchestrator |
2026-02-14 06:06:17.183654 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-14 06:06:17.183696 | orchestrator | Saturday 14 February 2026 06:06:00 +0000 (0:00:00.821) 0:29:13.244 *****
2026-02-14 06:06:17.183719 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:06:17.183740 | orchestrator |
2026-02-14 06:06:17.183750 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-14 06:06:17.183761 | orchestrator | Saturday 14 February 2026 06:06:01 +0000 (0:00:00.788) 0:29:14.033 *****
2026-02-14 06:06:17.183772 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:06:17.183782 | orchestrator |
2026-02-14 06:06:17.183793 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-14 06:06:17.183804 | orchestrator | Saturday 14 February 2026 06:06:02 +0000 (0:00:00.822) 0:29:14.838 *****
2026-02-14 06:06:17.183814 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:06:17.183825 | orchestrator |
2026-02-14 06:06:17.183836 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-14 06:06:17.183846 | orchestrator | Saturday 14 February 2026 06:06:03 +0000 (0:00:00.822) 0:29:15.661 *****
2026-02-14 06:06:17.183857 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:06:17.183868 | orchestrator |
2026-02-14 06:06:17.183928 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-14 06:06:17.183941 | orchestrator | Saturday 14 February 2026 06:06:04 +0000 (0:00:00.820) 0:29:16.482 *****
2026-02-14 06:06:17.183952 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:06:17.183963 | orchestrator |
2026-02-14 06:06:17.183974 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-14 06:06:17.183984 | orchestrator | Saturday 14 February 2026 06:06:04 +0000 (0:00:00.813) 0:29:17.296 *****
2026-02-14 06:06:17.183995 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:06:17.184006 | orchestrator |
2026-02-14 06:06:17.184016 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-14 06:06:17.184027 | orchestrator | Saturday 14 February 2026 06:06:05 +0000 (0:00:00.851) 0:29:18.148 *****
2026-02-14 06:06:17.184038 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:06:17.184048 | orchestrator |
2026-02-14 06:06:17.184059 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-14 06:06:17.184070 | orchestrator | Saturday 14 February 2026 06:06:06 +0000 (0:00:00.820) 0:29:18.968 *****
2026-02-14 06:06:17.184080 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:06:17.184091 | orchestrator |
2026-02-14 06:06:17.184102 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-14 06:06:17.184112 | orchestrator | Saturday 14 February 2026 06:06:07 +0000 (0:00:00.844) 0:29:19.813 *****
2026-02-14 06:06:17.184123 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:06:17.184134 | orchestrator |
2026-02-14 06:06:17.184145 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-14 06:06:17.184155 | orchestrator | Saturday 14 February 2026 06:06:08 +0000 (0:00:00.908) 0:29:20.722 *****
2026-02-14 06:06:17.184166 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:06:17.184177 | orchestrator |
2026-02-14 06:06:17.184187 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-14 06:06:17.184198 | orchestrator | Saturday 14 February 2026 06:06:09 +0000 (0:00:00.770) 0:29:21.493 *****
2026-02-14 06:06:17.184209 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:06:17.184220 | orchestrator |
2026-02-14 06:06:17.184230 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-14 06:06:17.184241 | orchestrator | Saturday 14 February 2026 06:06:10 +0000 (0:00:00.915) 0:29:22.408 *****
2026-02-14 06:06:17.184252 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:06:17.184262 | orchestrator |
2026-02-14 06:06:17.184273 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-14 06:06:17.184283 | orchestrator | Saturday 14 February 2026 06:06:10 +0000 (0:00:00.786) 0:29:23.194 *****
2026-02-14 06:06:17.184294 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:06:17.184305 | orchestrator |
2026-02-14 06:06:17.184316 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-14 06:06:17.184335 | orchestrator | Saturday 14 February 2026 06:06:11 +0000 (0:00:00.792) 0:29:23.987 *****
2026-02-14 06:06:17.184346 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:06:17.184357 | orchestrator |
2026-02-14 06:06:17.184368 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-14 06:06:17.184379 | orchestrator | Saturday 14 February 2026 06:06:12 +0000 (0:00:00.773) 0:29:24.761 *****
2026-02-14 06:06:17.184389 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:06:17.184400 | orchestrator |
2026-02-14 06:06:17.184411 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-14 06:06:17.184422 | orchestrator | Saturday 14 February 2026 06:06:13 +0000 (0:00:00.771) 0:29:25.532 *****
2026-02-14 06:06:17.184433 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:06:17.184443 | orchestrator |
2026-02-14 06:06:17.184454 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-14 06:06:17.184465 | orchestrator | Saturday 14 February 2026 06:06:14 +0000 (0:00:00.857) 0:29:26.389 *****
2026-02-14 06:06:17.184476 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:06:17.184487 | orchestrator |
2026-02-14 06:06:17.184498 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-14 06:06:17.184508 | orchestrator | Saturday 14 February 2026 06:06:14 +0000 (0:00:00.817) 0:29:27.207 *****
2026-02-14 06:06:17.184519 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3) 
2026-02-14 06:06:17.184530 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4) 
2026-02-14 06:06:17.184541 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5) 
2026-02-14 06:06:17.184551 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:06:17.184562 | orchestrator |
2026-02-14 06:06:17.184573 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-14 06:06:17.184589 | orchestrator | Saturday 14 February 2026 06:06:16 +0000 (0:00:01.202) 0:29:28.409 *****
2026-02-14 06:06:17.184601 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3) 
2026-02-14 06:06:17.184619 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4) 
2026-02-14 06:07:16.389976 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5) 
2026-02-14 06:07:16.390157 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:07:16.390175 | orchestrator |
2026-02-14 06:07:16.390187 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-14 06:07:16.390200 | orchestrator | Saturday 14 February 2026 06:06:17 +0000 (0:00:01.087) 0:29:29.497 *****
2026-02-14 06:07:16.390210 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3) 
2026-02-14 06:07:16.390252 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4) 
2026-02-14 06:07:16.390263 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5) 
2026-02-14 06:07:16.390273 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:07:16.390283 | orchestrator |
2026-02-14 06:07:16.390293 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-14 06:07:16.390304 | orchestrator | Saturday 14 February 2026 06:06:18 +0000 (0:00:01.109) 0:29:30.607 *****
2026-02-14 06:07:16.390314 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:07:16.390323 | orchestrator |
2026-02-14 06:07:16.390333 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-14 06:07:16.390343 | orchestrator | Saturday 14 February 2026 06:06:19 +0000 (0:00:00.806) 0:29:31.413 *****
2026-02-14 06:07:16.390353 | orchestrator | skipping: [testbed-node-1] => (item=0) 
2026-02-14 06:07:16.390363 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:07:16.390372 | orchestrator |
2026-02-14 06:07:16.390383 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-14 06:07:16.390393 | orchestrator | Saturday 14 February 2026 06:06:20 +0000 (0:00:00.918) 0:29:32.331 *****
2026-02-14 06:07:16.390402 | orchestrator | ok: [testbed-node-1]
2026-02-14 06:07:16.390413 | orchestrator |
2026-02-14 06:07:16.390422 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-14 06:07:16.390432 | orchestrator | Saturday 14 February 2026 06:06:21 +0000 (0:00:01.434) 0:29:33.766 *****
2026-02-14 06:07:16.390466 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-14 06:07:16.390479 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-14 06:07:16.390491 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-14 06:07:16.390503 | orchestrator |
2026-02-14 06:07:16.390514 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-14 06:07:16.390526 | orchestrator | Saturday 14 February 2026 06:06:23 +0000 (0:00:01.715) 0:29:35.481 *****
2026-02-14 06:07:16.390536 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-1
2026-02-14 06:07:16.390548 | orchestrator |
2026-02-14 06:07:16.390559 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-02-14 06:07:16.390570 | orchestrator | Saturday 14 February 2026 06:06:24 +0000 (0:00:01.122) 0:29:36.603 *****
2026-02-14 06:07:16.390581 | orchestrator | ok: [testbed-node-1]
2026-02-14 06:07:16.390594 | orchestrator |
2026-02-14 06:07:16.390605 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-02-14 06:07:16.390616 | orchestrator | Saturday 14 February 2026 06:06:25 +0000 (0:00:01.655) 0:29:38.258 *****
2026-02-14 06:07:16.390627 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:07:16.390638 | orchestrator |
2026-02-14 06:07:16.390649 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-02-14 06:07:16.390660 | orchestrator | Saturday 14 February 2026 06:06:27 +0000 (0:00:01.222) 0:29:39.481 *****
2026-02-14 06:07:16.390671 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-14 06:07:16.390686 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-14 06:07:16.390702 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-14 06:07:16.390718 | orchestrator | ok: [testbed-node-1 -> {{ groups[mon_group_name][0] }}]
2026-02-14 06:07:16.390735 | orchestrator |
2026-02-14 06:07:16.390750 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-02-14 06:07:16.390767 | orchestrator | Saturday 14 February 2026 06:06:34 +0000 (0:00:07.163) 0:29:46.644 *****
2026-02-14 06:07:16.390782 | orchestrator | ok: [testbed-node-1]
2026-02-14 06:07:16.390799 | orchestrator |
2026-02-14 06:07:16.390814 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-02-14 06:07:16.390828 | orchestrator | Saturday 14 February 2026 06:06:35 +0000 (0:00:01.234) 0:29:47.879 *****
2026-02-14 06:07:16.390844 | orchestrator | skipping: [testbed-node-1] => (item=None) 
2026-02-14 06:07:16.390861 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-14 06:07:16.390925 | orchestrator |
2026-02-14 06:07:16.390944 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-02-14 06:07:16.390962 | orchestrator | Saturday 14 February 2026 06:06:38 +0000 (0:00:03.223) 0:29:51.102 *****
2026-02-14 06:07:16.390978 | orchestrator | skipping: [testbed-node-1] => (item=None) 
2026-02-14 06:07:16.390995 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-02-14 06:07:16.391011 | orchestrator |
2026-02-14 06:07:16.391028 | orchestrator | TASK [ceph-mgr : Set mgr key permissions]
************************************** 2026-02-14 06:07:16.391046 | orchestrator | Saturday 14 February 2026 06:06:40 +0000 (0:00:02.216) 0:29:53.318 ***** 2026-02-14 06:07:16.391064 | orchestrator | ok: [testbed-node-1] 2026-02-14 06:07:16.391081 | orchestrator | 2026-02-14 06:07:16.391100 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-02-14 06:07:16.391119 | orchestrator | Saturday 14 February 2026 06:06:42 +0000 (0:00:01.588) 0:29:54.907 ***** 2026-02-14 06:07:16.391137 | orchestrator | skipping: [testbed-node-1] 2026-02-14 06:07:16.391155 | orchestrator | 2026-02-14 06:07:16.391173 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-14 06:07:16.391209 | orchestrator | Saturday 14 February 2026 06:06:43 +0000 (0:00:00.860) 0:29:55.768 ***** 2026-02-14 06:07:16.391246 | orchestrator | skipping: [testbed-node-1] 2026-02-14 06:07:16.391266 | orchestrator | 2026-02-14 06:07:16.391284 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-14 06:07:16.391330 | orchestrator | Saturday 14 February 2026 06:06:44 +0000 (0:00:00.846) 0:29:56.614 ***** 2026-02-14 06:07:16.391351 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-1 2026-02-14 06:07:16.391369 | orchestrator | 2026-02-14 06:07:16.391387 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-02-14 06:07:16.391403 | orchestrator | Saturday 14 February 2026 06:06:45 +0000 (0:00:01.160) 0:29:57.775 ***** 2026-02-14 06:07:16.391422 | orchestrator | skipping: [testbed-node-1] 2026-02-14 06:07:16.391439 | orchestrator | 2026-02-14 06:07:16.391457 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-02-14 06:07:16.391476 | orchestrator | Saturday 14 February 2026 06:06:46 +0000 (0:00:01.145) 0:29:58.921 ***** 2026-02-14 
06:07:16.391493 | orchestrator | skipping: [testbed-node-1] 2026-02-14 06:07:16.391511 | orchestrator | 2026-02-14 06:07:16.391528 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-02-14 06:07:16.391547 | orchestrator | Saturday 14 February 2026 06:06:47 +0000 (0:00:01.236) 0:30:00.158 ***** 2026-02-14 06:07:16.391565 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-1 2026-02-14 06:07:16.391583 | orchestrator | 2026-02-14 06:07:16.391600 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-02-14 06:07:16.391617 | orchestrator | Saturday 14 February 2026 06:06:49 +0000 (0:00:01.408) 0:30:01.567 ***** 2026-02-14 06:07:16.391635 | orchestrator | ok: [testbed-node-1] 2026-02-14 06:07:16.391653 | orchestrator | 2026-02-14 06:07:16.391670 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-02-14 06:07:16.391689 | orchestrator | Saturday 14 February 2026 06:06:51 +0000 (0:00:02.167) 0:30:03.734 ***** 2026-02-14 06:07:16.391708 | orchestrator | ok: [testbed-node-1] 2026-02-14 06:07:16.391726 | orchestrator | 2026-02-14 06:07:16.391744 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-02-14 06:07:16.391759 | orchestrator | Saturday 14 February 2026 06:06:53 +0000 (0:00:02.024) 0:30:05.759 ***** 2026-02-14 06:07:16.391770 | orchestrator | ok: [testbed-node-1] 2026-02-14 06:07:16.391781 | orchestrator | 2026-02-14 06:07:16.391792 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-02-14 06:07:16.391803 | orchestrator | Saturday 14 February 2026 06:06:55 +0000 (0:00:02.511) 0:30:08.270 ***** 2026-02-14 06:07:16.391814 | orchestrator | changed: [testbed-node-1] 2026-02-14 06:07:16.391824 | orchestrator | 2026-02-14 06:07:16.391835 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] 
************************************** 2026-02-14 06:07:16.391846 | orchestrator | Saturday 14 February 2026 06:06:59 +0000 (0:00:03.545) 0:30:11.815 ***** 2026-02-14 06:07:16.391857 | orchestrator | skipping: [testbed-node-1] 2026-02-14 06:07:16.391868 | orchestrator | 2026-02-14 06:07:16.391925 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-02-14 06:07:16.391937 | orchestrator | 2026-02-14 06:07:16.391948 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-02-14 06:07:16.391959 | orchestrator | Saturday 14 February 2026 06:07:00 +0000 (0:00:01.085) 0:30:12.900 ***** 2026-02-14 06:07:16.391970 | orchestrator | changed: [testbed-node-2] 2026-02-14 06:07:16.391981 | orchestrator | 2026-02-14 06:07:16.391992 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-02-14 06:07:16.392003 | orchestrator | Saturday 14 February 2026 06:07:03 +0000 (0:00:02.500) 0:30:15.401 ***** 2026-02-14 06:07:16.392013 | orchestrator | changed: [testbed-node-2] 2026-02-14 06:07:16.392024 | orchestrator | 2026-02-14 06:07:16.392035 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-14 06:07:16.392046 | orchestrator | Saturday 14 February 2026 06:07:05 +0000 (0:00:02.050) 0:30:17.451 ***** 2026-02-14 06:07:16.392057 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2 2026-02-14 06:07:16.392080 | orchestrator | 2026-02-14 06:07:16.392091 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-14 06:07:16.392102 | orchestrator | Saturday 14 February 2026 06:07:06 +0000 (0:00:01.207) 0:30:18.659 ***** 2026-02-14 06:07:16.392113 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:07:16.392124 | orchestrator | 2026-02-14 06:07:16.392135 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] 
***************************************** 2026-02-14 06:07:16.392146 | orchestrator | Saturday 14 February 2026 06:07:07 +0000 (0:00:01.556) 0:30:20.215 ***** 2026-02-14 06:07:16.392157 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:07:16.392167 | orchestrator | 2026-02-14 06:07:16.392178 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-14 06:07:16.392189 | orchestrator | Saturday 14 February 2026 06:07:09 +0000 (0:00:01.173) 0:30:21.389 ***** 2026-02-14 06:07:16.392200 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:07:16.392210 | orchestrator | 2026-02-14 06:07:16.392221 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-14 06:07:16.392232 | orchestrator | Saturday 14 February 2026 06:07:10 +0000 (0:00:01.517) 0:30:22.907 ***** 2026-02-14 06:07:16.392243 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:07:16.392254 | orchestrator | 2026-02-14 06:07:16.392265 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-14 06:07:16.392275 | orchestrator | Saturday 14 February 2026 06:07:11 +0000 (0:00:01.146) 0:30:24.054 ***** 2026-02-14 06:07:16.392286 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:07:16.392297 | orchestrator | 2026-02-14 06:07:16.392307 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-14 06:07:16.392318 | orchestrator | Saturday 14 February 2026 06:07:12 +0000 (0:00:01.145) 0:30:25.199 ***** 2026-02-14 06:07:16.392329 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:07:16.392340 | orchestrator | 2026-02-14 06:07:16.392351 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-14 06:07:16.392362 | orchestrator | Saturday 14 February 2026 06:07:14 +0000 (0:00:01.201) 0:30:26.400 ***** 2026-02-14 06:07:16.392372 | orchestrator | skipping: [testbed-node-2] 2026-02-14 
06:07:16.392383 | orchestrator | 2026-02-14 06:07:16.392403 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-14 06:07:16.392414 | orchestrator | Saturday 14 February 2026 06:07:15 +0000 (0:00:01.186) 0:30:27.586 ***** 2026-02-14 06:07:16.392425 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:07:16.392436 | orchestrator | 2026-02-14 06:07:16.392459 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-14 06:07:42.609307 | orchestrator | Saturday 14 February 2026 06:07:16 +0000 (0:00:01.116) 0:30:28.703 ***** 2026-02-14 06:07:42.609453 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 06:07:42.609482 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:07:42.609504 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-14 06:07:42.609524 | orchestrator | 2026-02-14 06:07:42.609541 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-14 06:07:42.609552 | orchestrator | Saturday 14 February 2026 06:07:18 +0000 (0:00:02.050) 0:30:30.754 ***** 2026-02-14 06:07:42.609564 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:07:42.609575 | orchestrator | 2026-02-14 06:07:42.609586 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-14 06:07:42.609597 | orchestrator | Saturday 14 February 2026 06:07:19 +0000 (0:00:01.251) 0:30:32.006 ***** 2026-02-14 06:07:42.609609 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 06:07:42.609619 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:07:42.609630 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-14 06:07:42.609641 | orchestrator | 2026-02-14 
06:07:42.609652 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-14 06:07:42.609690 | orchestrator | Saturday 14 February 2026 06:07:22 +0000 (0:00:03.291) 0:30:35.298 ***** 2026-02-14 06:07:42.609703 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-14 06:07:42.609714 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-14 06:07:42.609725 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-14 06:07:42.609736 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:07:42.609747 | orchestrator | 2026-02-14 06:07:42.609757 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-14 06:07:42.609768 | orchestrator | Saturday 14 February 2026 06:07:24 +0000 (0:00:01.849) 0:30:37.147 ***** 2026-02-14 06:07:42.609780 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-14 06:07:42.609794 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-14 06:07:42.609805 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-14 06:07:42.609816 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:07:42.609829 | orchestrator | 2026-02-14 06:07:42.609842 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-14 06:07:42.609856 | orchestrator | 
Saturday 14 February 2026 06:07:26 +0000 (0:00:02.173) 0:30:39.321 ***** 2026-02-14 06:07:42.609905 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 06:07:42.609922 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 06:07:42.609936 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 06:07:42.609950 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:07:42.609962 | orchestrator | 2026-02-14 06:07:42.609975 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-14 06:07:42.610004 | orchestrator | Saturday 14 February 2026 06:07:28 +0000 (0:00:01.314) 0:30:40.635 ***** 2026-02-14 06:07:42.610101 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'fcade5e8eca4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 
'name=ceph-mon-testbed-node-0'], 'start': '2026-02-14 06:07:20.242276', 'end': '2026-02-14 06:07:20.280975', 'delta': '0:00:00.038699', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fcade5e8eca4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-14 06:07:42.610131 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'b8937503c016', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-14 06:07:21.190692', 'end': '2026-02-14 06:07:21.235837', 'delta': '0:00:00.045145', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b8937503c016'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-14 06:07:42.610145 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'bc1e9cbf1ddd', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-14 06:07:21.730382', 'end': '2026-02-14 06:07:21.777662', 'delta': '0:00:00.047280', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 
'removes': None, 'stdin': None}}, 'stdout_lines': ['bc1e9cbf1ddd'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-14 06:07:42.610157 | orchestrator | 2026-02-14 06:07:42.610170 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-14 06:07:42.610183 | orchestrator | Saturday 14 February 2026 06:07:29 +0000 (0:00:01.219) 0:30:41.855 ***** 2026-02-14 06:07:42.610195 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:07:42.610206 | orchestrator | 2026-02-14 06:07:42.610216 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-14 06:07:42.610227 | orchestrator | Saturday 14 February 2026 06:07:30 +0000 (0:00:01.313) 0:30:43.168 ***** 2026-02-14 06:07:42.610238 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:07:42.610249 | orchestrator | 2026-02-14 06:07:42.610259 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-14 06:07:42.610270 | orchestrator | Saturday 14 February 2026 06:07:32 +0000 (0:00:01.309) 0:30:44.477 ***** 2026-02-14 06:07:42.610281 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:07:42.610292 | orchestrator | 2026-02-14 06:07:42.610303 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-14 06:07:42.610314 | orchestrator | Saturday 14 February 2026 06:07:33 +0000 (0:00:01.141) 0:30:45.619 ***** 2026-02-14 06:07:42.610324 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-14 06:07:42.610335 | orchestrator | 2026-02-14 06:07:42.610346 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-14 06:07:42.610357 | orchestrator | Saturday 14 February 2026 06:07:35 +0000 (0:00:02.194) 0:30:47.813 ***** 2026-02-14 06:07:42.610368 | orchestrator | ok: [testbed-node-2] 2026-02-14 
06:07:42.610379 | orchestrator | 2026-02-14 06:07:42.610390 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-14 06:07:42.610401 | orchestrator | Saturday 14 February 2026 06:07:36 +0000 (0:00:01.164) 0:30:48.978 ***** 2026-02-14 06:07:42.610411 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:07:42.610422 | orchestrator | 2026-02-14 06:07:42.610433 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-14 06:07:42.610444 | orchestrator | Saturday 14 February 2026 06:07:37 +0000 (0:00:01.155) 0:30:50.134 ***** 2026-02-14 06:07:42.610454 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:07:42.610465 | orchestrator | 2026-02-14 06:07:42.610484 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-14 06:07:42.610495 | orchestrator | Saturday 14 February 2026 06:07:39 +0000 (0:00:01.316) 0:30:51.451 ***** 2026-02-14 06:07:42.610505 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:07:42.610516 | orchestrator | 2026-02-14 06:07:42.610527 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-14 06:07:42.610538 | orchestrator | Saturday 14 February 2026 06:07:40 +0000 (0:00:01.196) 0:30:52.648 ***** 2026-02-14 06:07:42.610549 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:07:42.610559 | orchestrator | 2026-02-14 06:07:42.610576 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-14 06:07:42.610587 | orchestrator | Saturday 14 February 2026 06:07:41 +0000 (0:00:01.149) 0:30:53.797 ***** 2026-02-14 06:07:42.610598 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:07:42.610609 | orchestrator | 2026-02-14 06:07:42.610627 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-14 06:07:49.847356 | orchestrator | Saturday 14 
February 2026 06:07:42 +0000 (0:00:01.125) 0:30:54.923 ***** 2026-02-14 06:07:49.847579 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:07:49.847606 | orchestrator | 2026-02-14 06:07:49.847619 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-14 06:07:49.847643 | orchestrator | Saturday 14 February 2026 06:07:43 +0000 (0:00:01.134) 0:30:56.058 ***** 2026-02-14 06:07:49.847655 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:07:49.847666 | orchestrator | 2026-02-14 06:07:49.847678 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-14 06:07:49.847689 | orchestrator | Saturday 14 February 2026 06:07:44 +0000 (0:00:01.168) 0:30:57.227 ***** 2026-02-14 06:07:49.847700 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:07:49.847711 | orchestrator | 2026-02-14 06:07:49.847722 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-14 06:07:49.847734 | orchestrator | Saturday 14 February 2026 06:07:46 +0000 (0:00:01.209) 0:30:58.437 ***** 2026-02-14 06:07:49.847745 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:07:49.847756 | orchestrator | 2026-02-14 06:07:49.847767 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-14 06:07:49.847778 | orchestrator | Saturday 14 February 2026 06:07:47 +0000 (0:00:01.121) 0:30:59.559 ***** 2026-02-14 06:07:49.847791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:07:49.847805 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:07:49.847817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:07:49.847830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-07-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-14 06:07:49.847869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:07:49.847924 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:07:49.847952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:07:49.847995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b284434b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part16', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part14', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': 
'8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part15', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part1', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-14 06:07:49.848012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:07:49.848038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:07:49.848051 | orchestrator | 
skipping: [testbed-node-2] 2026-02-14 06:07:49.848064 | orchestrator | 2026-02-14 06:07:49.848078 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-14 06:07:49.848090 | orchestrator | Saturday 14 February 2026 06:07:48 +0000 (0:00:01.329) 0:31:00.888 ***** 2026-02-14 06:07:49.848105 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:07:49.848135 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:07:57.941097 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': 
None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:07:57.941214 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-07-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:07:57.941233 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:07:57.941272 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 
'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:07:57.941285 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:07:57.941338 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b284434b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part16', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 
'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part14', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part15', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part1', 'scsi-SQEMU_QEMU_HARDDISK_b284434b-c033-46cd-9dae-de97b39c2172-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:07:57.941354 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:07:57.941375 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:07:57.941388 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:07:57.941401 | orchestrator | 2026-02-14 06:07:57.941413 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-14 06:07:57.941425 | orchestrator | Saturday 14 February 2026 06:07:49 +0000 (0:00:01.280) 0:31:02.169 ***** 2026-02-14 06:07:57.941437 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:07:57.941448 | orchestrator | 2026-02-14 06:07:57.941460 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-14 06:07:57.941471 | orchestrator 
| Saturday 14 February 2026 06:07:51 +0000 (0:00:01.836) 0:31:04.005 ***** 2026-02-14 06:07:57.941482 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:07:57.941493 | orchestrator | 2026-02-14 06:07:57.941504 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-14 06:07:57.941515 | orchestrator | Saturday 14 February 2026 06:07:52 +0000 (0:00:01.165) 0:31:05.171 ***** 2026-02-14 06:07:57.941529 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:07:57.941542 | orchestrator | 2026-02-14 06:07:57.941555 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-14 06:07:57.941568 | orchestrator | Saturday 14 February 2026 06:07:54 +0000 (0:00:01.492) 0:31:06.663 ***** 2026-02-14 06:07:57.941580 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:07:57.941593 | orchestrator | 2026-02-14 06:07:57.941612 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-14 06:07:57.941625 | orchestrator | Saturday 14 February 2026 06:07:55 +0000 (0:00:01.217) 0:31:07.881 ***** 2026-02-14 06:07:57.941638 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:07:57.941651 | orchestrator | 2026-02-14 06:07:57.941664 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-14 06:07:57.941678 | orchestrator | Saturday 14 February 2026 06:07:56 +0000 (0:00:01.236) 0:31:09.117 ***** 2026-02-14 06:07:57.941692 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:07:57.941705 | orchestrator | 2026-02-14 06:07:57.941716 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-14 06:07:57.941735 | orchestrator | Saturday 14 February 2026 06:07:57 +0000 (0:00:01.141) 0:31:10.259 ***** 2026-02-14 06:08:36.169298 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-02-14 06:08:36.169415 | orchestrator | ok: 
[testbed-node-2] => (item=testbed-node-1) 2026-02-14 06:08:36.169430 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-14 06:08:36.169442 | orchestrator | 2026-02-14 06:08:36.169454 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-14 06:08:36.169467 | orchestrator | Saturday 14 February 2026 06:07:59 +0000 (0:00:02.060) 0:31:12.319 ***** 2026-02-14 06:08:36.169478 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-14 06:08:36.169490 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-14 06:08:36.169500 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-14 06:08:36.169536 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:08:36.169548 | orchestrator | 2026-02-14 06:08:36.169559 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-14 06:08:36.169570 | orchestrator | Saturday 14 February 2026 06:08:01 +0000 (0:00:01.161) 0:31:13.481 ***** 2026-02-14 06:08:36.169581 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:08:36.169592 | orchestrator | 2026-02-14 06:08:36.169603 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-14 06:08:36.169614 | orchestrator | Saturday 14 February 2026 06:08:02 +0000 (0:00:01.163) 0:31:14.645 ***** 2026-02-14 06:08:36.169625 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 06:08:36.169636 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:08:36.169647 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-14 06:08:36.169657 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-14 06:08:36.169668 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-02-14 06:08:36.169678 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-14 06:08:36.169689 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-14 06:08:36.169699 | orchestrator | 2026-02-14 06:08:36.169710 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-14 06:08:36.169721 | orchestrator | Saturday 14 February 2026 06:08:04 +0000 (0:00:02.228) 0:31:16.873 ***** 2026-02-14 06:08:36.169731 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 06:08:36.169742 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:08:36.169753 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-14 06:08:36.169764 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-14 06:08:36.169774 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-14 06:08:36.169785 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-14 06:08:36.169796 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-14 06:08:36.169806 | orchestrator | 2026-02-14 06:08:36.169817 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-14 06:08:36.169828 | orchestrator | Saturday 14 February 2026 06:08:06 +0000 (0:00:02.318) 0:31:19.192 ***** 2026-02-14 06:08:36.169838 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2 2026-02-14 06:08:36.169850 | orchestrator | 2026-02-14 06:08:36.169861 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-14 06:08:36.169908 
| orchestrator | Saturday 14 February 2026 06:08:08 +0000 (0:00:01.180) 0:31:20.372 ***** 2026-02-14 06:08:36.169920 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2 2026-02-14 06:08:36.169930 | orchestrator | 2026-02-14 06:08:36.169941 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-14 06:08:36.169952 | orchestrator | Saturday 14 February 2026 06:08:09 +0000 (0:00:01.154) 0:31:21.527 ***** 2026-02-14 06:08:36.169962 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:08:36.169974 | orchestrator | 2026-02-14 06:08:36.169985 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-14 06:08:36.169995 | orchestrator | Saturday 14 February 2026 06:08:10 +0000 (0:00:01.579) 0:31:23.106 ***** 2026-02-14 06:08:36.170006 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:08:36.170076 | orchestrator | 2026-02-14 06:08:36.170091 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-14 06:08:36.170102 | orchestrator | Saturday 14 February 2026 06:08:11 +0000 (0:00:01.148) 0:31:24.255 ***** 2026-02-14 06:08:36.170122 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:08:36.170133 | orchestrator | 2026-02-14 06:08:36.170144 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-14 06:08:36.170171 | orchestrator | Saturday 14 February 2026 06:08:13 +0000 (0:00:01.135) 0:31:25.391 ***** 2026-02-14 06:08:36.170182 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:08:36.170193 | orchestrator | 2026-02-14 06:08:36.170204 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-14 06:08:36.170214 | orchestrator | Saturday 14 February 2026 06:08:14 +0000 (0:00:01.107) 0:31:26.498 ***** 2026-02-14 06:08:36.170225 | orchestrator | ok: [testbed-node-2] 
2026-02-14 06:08:36.170236 | orchestrator | 2026-02-14 06:08:36.170247 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-14 06:08:36.170258 | orchestrator | Saturday 14 February 2026 06:08:15 +0000 (0:00:01.545) 0:31:28.043 ***** 2026-02-14 06:08:36.170269 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:08:36.170279 | orchestrator | 2026-02-14 06:08:36.170290 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-14 06:08:36.170319 | orchestrator | Saturday 14 February 2026 06:08:16 +0000 (0:00:01.135) 0:31:29.179 ***** 2026-02-14 06:08:36.170331 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:08:36.170342 | orchestrator | 2026-02-14 06:08:36.170353 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-14 06:08:36.170364 | orchestrator | Saturday 14 February 2026 06:08:18 +0000 (0:00:01.239) 0:31:30.419 ***** 2026-02-14 06:08:36.170375 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:08:36.170385 | orchestrator | 2026-02-14 06:08:36.170396 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-14 06:08:36.170407 | orchestrator | Saturday 14 February 2026 06:08:19 +0000 (0:00:01.594) 0:31:32.013 ***** 2026-02-14 06:08:36.170418 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:08:36.170428 | orchestrator | 2026-02-14 06:08:36.170439 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-14 06:08:36.170450 | orchestrator | Saturday 14 February 2026 06:08:21 +0000 (0:00:01.612) 0:31:33.625 ***** 2026-02-14 06:08:36.170460 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:08:36.170471 | orchestrator | 2026-02-14 06:08:36.170482 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-14 06:08:36.170493 | orchestrator | Saturday 14 
February 2026 06:08:22 +0000 (0:00:00.799) 0:31:34.425 ***** 2026-02-14 06:08:36.170503 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:08:36.170514 | orchestrator | 2026-02-14 06:08:36.170525 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-14 06:08:36.170536 | orchestrator | Saturday 14 February 2026 06:08:22 +0000 (0:00:00.813) 0:31:35.238 ***** 2026-02-14 06:08:36.170546 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:08:36.170557 | orchestrator | 2026-02-14 06:08:36.170568 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-14 06:08:36.170579 | orchestrator | Saturday 14 February 2026 06:08:23 +0000 (0:00:00.828) 0:31:36.067 ***** 2026-02-14 06:08:36.170589 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:08:36.170600 | orchestrator | 2026-02-14 06:08:36.170610 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-14 06:08:36.170621 | orchestrator | Saturday 14 February 2026 06:08:24 +0000 (0:00:00.801) 0:31:36.869 ***** 2026-02-14 06:08:36.170632 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:08:36.170643 | orchestrator | 2026-02-14 06:08:36.170653 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-14 06:08:36.170664 | orchestrator | Saturday 14 February 2026 06:08:25 +0000 (0:00:00.832) 0:31:37.702 ***** 2026-02-14 06:08:36.170675 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:08:36.170686 | orchestrator | 2026-02-14 06:08:36.170696 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-14 06:08:36.170707 | orchestrator | Saturday 14 February 2026 06:08:26 +0000 (0:00:00.777) 0:31:38.479 ***** 2026-02-14 06:08:36.170725 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:08:36.170736 | orchestrator | 2026-02-14 06:08:36.170746 | orchestrator | 
TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-14 06:08:36.170757 | orchestrator | Saturday 14 February 2026 06:08:26 +0000 (0:00:00.826) 0:31:39.305 ***** 2026-02-14 06:08:36.170768 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:08:36.170778 | orchestrator | 2026-02-14 06:08:36.170789 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-14 06:08:36.170800 | orchestrator | Saturday 14 February 2026 06:08:27 +0000 (0:00:00.802) 0:31:40.107 ***** 2026-02-14 06:08:36.170810 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:08:36.170821 | orchestrator | 2026-02-14 06:08:36.170832 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-14 06:08:36.170843 | orchestrator | Saturday 14 February 2026 06:08:28 +0000 (0:00:00.805) 0:31:40.913 ***** 2026-02-14 06:08:36.170853 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:08:36.170864 | orchestrator | 2026-02-14 06:08:36.170898 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-14 06:08:36.170909 | orchestrator | Saturday 14 February 2026 06:08:29 +0000 (0:00:01.060) 0:31:41.973 ***** 2026-02-14 06:08:36.170920 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:08:36.170931 | orchestrator | 2026-02-14 06:08:36.170942 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-14 06:08:36.170952 | orchestrator | Saturday 14 February 2026 06:08:30 +0000 (0:00:00.799) 0:31:42.773 ***** 2026-02-14 06:08:36.170963 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:08:36.170974 | orchestrator | 2026-02-14 06:08:36.170984 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-14 06:08:36.170995 | orchestrator | Saturday 14 February 2026 06:08:31 +0000 (0:00:00.787) 0:31:43.561 ***** 2026-02-14 06:08:36.171006 | 
orchestrator | skipping: [testbed-node-2] 2026-02-14 06:08:36.171016 | orchestrator | 2026-02-14 06:08:36.171027 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-14 06:08:36.171038 | orchestrator | Saturday 14 February 2026 06:08:32 +0000 (0:00:00.809) 0:31:44.371 ***** 2026-02-14 06:08:36.171049 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:08:36.171059 | orchestrator | 2026-02-14 06:08:36.171070 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-14 06:08:36.171081 | orchestrator | Saturday 14 February 2026 06:08:32 +0000 (0:00:00.838) 0:31:45.209 ***** 2026-02-14 06:08:36.171092 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:08:36.171102 | orchestrator | 2026-02-14 06:08:36.171119 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-14 06:08:36.171129 | orchestrator | Saturday 14 February 2026 06:08:33 +0000 (0:00:00.795) 0:31:46.005 ***** 2026-02-14 06:08:36.171140 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:08:36.171151 | orchestrator | 2026-02-14 06:08:36.171162 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-14 06:08:36.171172 | orchestrator | Saturday 14 February 2026 06:08:34 +0000 (0:00:00.815) 0:31:46.820 ***** 2026-02-14 06:08:36.171183 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:08:36.171194 | orchestrator | 2026-02-14 06:08:36.171205 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-14 06:08:36.171216 | orchestrator | Saturday 14 February 2026 06:08:35 +0000 (0:00:00.852) 0:31:47.672 ***** 2026-02-14 06:08:36.171233 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:09:25.561156 | orchestrator | 2026-02-14 06:09:25.561303 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] 
************************* 2026-02-14 06:09:25.561334 | orchestrator | Saturday 14 February 2026 06:08:36 +0000 (0:00:00.811) 0:31:48.484 ***** 2026-02-14 06:09:25.561356 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:09:25.561378 | orchestrator | 2026-02-14 06:09:25.561398 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-14 06:09:25.561417 | orchestrator | Saturday 14 February 2026 06:08:36 +0000 (0:00:00.819) 0:31:49.304 ***** 2026-02-14 06:09:25.561471 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:09:25.561493 | orchestrator | 2026-02-14 06:09:25.561514 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-14 06:09:25.561533 | orchestrator | Saturday 14 February 2026 06:08:37 +0000 (0:00:00.802) 0:31:50.106 ***** 2026-02-14 06:09:25.561551 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:09:25.561571 | orchestrator | 2026-02-14 06:09:25.561592 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-14 06:09:25.561611 | orchestrator | Saturday 14 February 2026 06:08:38 +0000 (0:00:00.863) 0:31:50.970 ***** 2026-02-14 06:09:25.561631 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:09:25.561651 | orchestrator | 2026-02-14 06:09:25.561672 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-14 06:09:25.561694 | orchestrator | Saturday 14 February 2026 06:08:39 +0000 (0:00:01.050) 0:31:52.021 ***** 2026-02-14 06:09:25.561717 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:09:25.561741 | orchestrator | 2026-02-14 06:09:25.561762 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-14 06:09:25.561785 | orchestrator | Saturday 14 February 2026 06:08:41 +0000 (0:00:01.619) 0:31:53.640 ***** 2026-02-14 06:09:25.561806 | orchestrator | ok: [testbed-node-2] 2026-02-14 
06:09:25.561829 | orchestrator | 2026-02-14 06:09:25.561851 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-14 06:09:25.561901 | orchestrator | Saturday 14 February 2026 06:08:43 +0000 (0:00:02.133) 0:31:55.774 ***** 2026-02-14 06:09:25.561923 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2 2026-02-14 06:09:25.561945 | orchestrator | 2026-02-14 06:09:25.561965 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-14 06:09:25.561985 | orchestrator | Saturday 14 February 2026 06:08:44 +0000 (0:00:01.178) 0:31:56.953 ***** 2026-02-14 06:09:25.562005 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:09:25.562120 | orchestrator | 2026-02-14 06:09:25.562143 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-14 06:09:25.562163 | orchestrator | Saturday 14 February 2026 06:08:45 +0000 (0:00:01.180) 0:31:58.134 ***** 2026-02-14 06:09:25.562183 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:09:25.562204 | orchestrator | 2026-02-14 06:09:25.562223 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-14 06:09:25.562242 | orchestrator | Saturday 14 February 2026 06:08:47 +0000 (0:00:01.264) 0:31:59.398 ***** 2026-02-14 06:09:25.562261 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-14 06:09:25.562279 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-14 06:09:25.562298 | orchestrator | 2026-02-14 06:09:25.562316 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-14 06:09:25.562335 | orchestrator | Saturday 14 February 2026 06:08:49 +0000 (0:00:02.000) 0:32:01.399 ***** 2026-02-14 06:09:25.562353 | orchestrator | ok: 
[testbed-node-2] 2026-02-14 06:09:25.562370 | orchestrator | 2026-02-14 06:09:25.562388 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-14 06:09:25.562406 | orchestrator | Saturday 14 February 2026 06:08:50 +0000 (0:00:01.513) 0:32:02.912 ***** 2026-02-14 06:09:25.562424 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:09:25.562443 | orchestrator | 2026-02-14 06:09:25.562461 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-14 06:09:25.562479 | orchestrator | Saturday 14 February 2026 06:08:51 +0000 (0:00:01.296) 0:32:04.209 ***** 2026-02-14 06:09:25.562497 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:09:25.562515 | orchestrator | 2026-02-14 06:09:25.562533 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-14 06:09:25.562553 | orchestrator | Saturday 14 February 2026 06:08:52 +0000 (0:00:00.791) 0:32:05.000 ***** 2026-02-14 06:09:25.562572 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:09:25.562612 | orchestrator | 2026-02-14 06:09:25.562633 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-14 06:09:25.562653 | orchestrator | Saturday 14 February 2026 06:08:53 +0000 (0:00:00.843) 0:32:05.844 ***** 2026-02-14 06:09:25.562673 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2 2026-02-14 06:09:25.562690 | orchestrator | 2026-02-14 06:09:25.562707 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-14 06:09:25.562726 | orchestrator | Saturday 14 February 2026 06:08:54 +0000 (0:00:01.196) 0:32:07.043 ***** 2026-02-14 06:09:25.562745 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:09:25.562763 | orchestrator | 2026-02-14 06:09:25.562800 | orchestrator | TASK [ceph-container-common : Pulling 
alertmanager/prometheus/grafana container images] *** 2026-02-14 06:09:25.562820 | orchestrator | Saturday 14 February 2026 06:08:56 +0000 (0:00:01.904) 0:32:08.947 ***** 2026-02-14 06:09:25.562838 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-14 06:09:25.562856 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-14 06:09:25.562906 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-14 06:09:25.562923 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:09:25.562933 | orchestrator | 2026-02-14 06:09:25.562944 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-14 06:09:25.562955 | orchestrator | Saturday 14 February 2026 06:08:57 +0000 (0:00:01.152) 0:32:10.100 ***** 2026-02-14 06:09:25.562990 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:09:25.563002 | orchestrator | 2026-02-14 06:09:25.563013 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-14 06:09:25.563024 | orchestrator | Saturday 14 February 2026 06:08:58 +0000 (0:00:01.144) 0:32:11.244 ***** 2026-02-14 06:09:25.563033 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:09:25.563043 | orchestrator | 2026-02-14 06:09:25.563052 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-14 06:09:25.563062 | orchestrator | Saturday 14 February 2026 06:09:00 +0000 (0:00:01.208) 0:32:12.453 ***** 2026-02-14 06:09:25.563071 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:09:25.563081 | orchestrator | 2026-02-14 06:09:25.563090 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-14 06:09:25.563100 | orchestrator | Saturday 14 February 2026 06:09:01 +0000 (0:00:01.170) 0:32:13.623 ***** 2026-02-14 06:09:25.563109 | orchestrator | skipping: 
[testbed-node-2] 2026-02-14 06:09:25.563119 | orchestrator | 2026-02-14 06:09:25.563128 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-14 06:09:25.563138 | orchestrator | Saturday 14 February 2026 06:09:02 +0000 (0:00:01.160) 0:32:14.784 ***** 2026-02-14 06:09:25.563147 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:09:25.563157 | orchestrator | 2026-02-14 06:09:25.563166 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-14 06:09:25.563176 | orchestrator | Saturday 14 February 2026 06:09:03 +0000 (0:00:00.827) 0:32:15.612 ***** 2026-02-14 06:09:25.563185 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:09:25.563195 | orchestrator | 2026-02-14 06:09:25.563204 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-14 06:09:25.563214 | orchestrator | Saturday 14 February 2026 06:09:05 +0000 (0:00:02.212) 0:32:17.825 ***** 2026-02-14 06:09:25.563223 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:09:25.563233 | orchestrator | 2026-02-14 06:09:25.563242 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-14 06:09:25.563252 | orchestrator | Saturday 14 February 2026 06:09:06 +0000 (0:00:00.817) 0:32:18.643 ***** 2026-02-14 06:09:25.563261 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2 2026-02-14 06:09:25.563271 | orchestrator | 2026-02-14 06:09:25.563280 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-14 06:09:25.563299 | orchestrator | Saturday 14 February 2026 06:09:07 +0000 (0:00:01.185) 0:32:19.828 ***** 2026-02-14 06:09:25.563309 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:09:25.563319 | orchestrator | 2026-02-14 06:09:25.563328 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] 
******************** 2026-02-14 06:09:25.563338 | orchestrator | Saturday 14 February 2026 06:09:08 +0000 (0:00:01.266) 0:32:21.094 ***** 2026-02-14 06:09:25.563347 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:09:25.563356 | orchestrator | 2026-02-14 06:09:25.563366 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-14 06:09:25.563375 | orchestrator | Saturday 14 February 2026 06:09:09 +0000 (0:00:01.171) 0:32:22.266 ***** 2026-02-14 06:09:25.563384 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:09:25.563394 | orchestrator | 2026-02-14 06:09:25.563403 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-14 06:09:25.563413 | orchestrator | Saturday 14 February 2026 06:09:11 +0000 (0:00:01.341) 0:32:23.607 ***** 2026-02-14 06:09:25.563422 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:09:25.563432 | orchestrator | 2026-02-14 06:09:25.563441 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-14 06:09:25.563451 | orchestrator | Saturday 14 February 2026 06:09:12 +0000 (0:00:01.144) 0:32:24.752 ***** 2026-02-14 06:09:25.563460 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:09:25.563470 | orchestrator | 2026-02-14 06:09:25.563479 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-14 06:09:25.563488 | orchestrator | Saturday 14 February 2026 06:09:13 +0000 (0:00:01.180) 0:32:25.932 ***** 2026-02-14 06:09:25.563498 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:09:25.563507 | orchestrator | 2026-02-14 06:09:25.563517 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-14 06:09:25.563526 | orchestrator | Saturday 14 February 2026 06:09:14 +0000 (0:00:01.158) 0:32:27.091 ***** 2026-02-14 06:09:25.563536 | orchestrator | skipping: [testbed-node-2] 
2026-02-14 06:09:25.563545 | orchestrator | 2026-02-14 06:09:25.563555 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-14 06:09:25.563564 | orchestrator | Saturday 14 February 2026 06:09:15 +0000 (0:00:01.224) 0:32:28.315 ***** 2026-02-14 06:09:25.563573 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:09:25.563583 | orchestrator | 2026-02-14 06:09:25.563593 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-14 06:09:25.563602 | orchestrator | Saturday 14 February 2026 06:09:17 +0000 (0:00:01.169) 0:32:29.484 ***** 2026-02-14 06:09:25.563611 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:09:25.563621 | orchestrator | 2026-02-14 06:09:25.563630 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-14 06:09:25.563640 | orchestrator | Saturday 14 February 2026 06:09:17 +0000 (0:00:00.815) 0:32:30.300 ***** 2026-02-14 06:09:25.563654 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2 2026-02-14 06:09:25.563664 | orchestrator | 2026-02-14 06:09:25.563674 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-14 06:09:25.563683 | orchestrator | Saturday 14 February 2026 06:09:19 +0000 (0:00:01.205) 0:32:31.505 ***** 2026-02-14 06:09:25.563693 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph) 2026-02-14 06:09:25.563703 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/) 2026-02-14 06:09:25.563712 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-02-14 06:09:25.563722 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-02-14 06:09:25.563731 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-02-14 06:09:25.563741 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-02-14 06:09:25.563756 | 
orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-02-14 06:10:03.610811 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-02-14 06:10:03.610958 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-14 06:10:03.610996 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-14 06:10:03.611008 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-14 06:10:03.611018 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-14 06:10:03.611027 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-14 06:10:03.611037 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-14 06:10:03.611046 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph) 2026-02-14 06:10:03.611057 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph) 2026-02-14 06:10:03.611066 | orchestrator | 2026-02-14 06:10:03.611077 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-14 06:10:03.611087 | orchestrator | Saturday 14 February 2026 06:09:25 +0000 (0:00:06.367) 0:32:37.872 ***** 2026-02-14 06:10:03.611096 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:10:03.611121 | orchestrator | 2026-02-14 06:10:03.611142 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-14 06:10:03.611152 | orchestrator | Saturday 14 February 2026 06:09:26 +0000 (0:00:00.774) 0:32:38.646 ***** 2026-02-14 06:10:03.611161 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:10:03.611171 | orchestrator | 2026-02-14 06:10:03.611180 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-14 06:10:03.611190 | orchestrator | Saturday 14 February 2026 06:09:27 +0000 (0:00:00.785) 0:32:39.432 ***** 2026-02-14 06:10:03.611199 | 
orchestrator | skipping: [testbed-node-2] 2026-02-14 06:10:03.611209 | orchestrator | 2026-02-14 06:10:03.611218 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-14 06:10:03.611228 | orchestrator | Saturday 14 February 2026 06:09:28 +0000 (0:00:00.904) 0:32:40.337 ***** 2026-02-14 06:10:03.611237 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:10:03.611247 | orchestrator | 2026-02-14 06:10:03.611256 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-14 06:10:03.611266 | orchestrator | Saturday 14 February 2026 06:09:28 +0000 (0:00:00.837) 0:32:41.174 ***** 2026-02-14 06:10:03.611275 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:10:03.611285 | orchestrator | 2026-02-14 06:10:03.611295 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-14 06:10:03.611304 | orchestrator | Saturday 14 February 2026 06:09:29 +0000 (0:00:00.801) 0:32:41.976 ***** 2026-02-14 06:10:03.611314 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:10:03.611323 | orchestrator | 2026-02-14 06:10:03.611333 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-14 06:10:03.611344 | orchestrator | Saturday 14 February 2026 06:09:30 +0000 (0:00:00.760) 0:32:42.737 ***** 2026-02-14 06:10:03.611356 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:10:03.611368 | orchestrator | 2026-02-14 06:10:03.611379 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-14 06:10:03.611391 | orchestrator | Saturday 14 February 2026 06:09:31 +0000 (0:00:00.765) 0:32:43.503 ***** 2026-02-14 06:10:03.611402 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:10:03.611413 | orchestrator | 2026-02-14 06:10:03.611424 | orchestrator | TASK [ceph-config : Set_fact 
num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-14 06:10:03.611435 | orchestrator | Saturday 14 February 2026 06:09:31 +0000 (0:00:00.825) 0:32:44.329 ***** 2026-02-14 06:10:03.611446 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:10:03.611457 | orchestrator | 2026-02-14 06:10:03.611467 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-14 06:10:03.611477 | orchestrator | Saturday 14 February 2026 06:09:32 +0000 (0:00:00.809) 0:32:45.138 ***** 2026-02-14 06:10:03.611486 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:10:03.611496 | orchestrator | 2026-02-14 06:10:03.611505 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-14 06:10:03.611522 | orchestrator | Saturday 14 February 2026 06:09:33 +0000 (0:00:00.799) 0:32:45.938 ***** 2026-02-14 06:10:03.611532 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:10:03.611542 | orchestrator | 2026-02-14 06:10:03.611551 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-14 06:10:03.611561 | orchestrator | Saturday 14 February 2026 06:09:34 +0000 (0:00:00.841) 0:32:46.779 ***** 2026-02-14 06:10:03.611570 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:10:03.611580 | orchestrator | 2026-02-14 06:10:03.611589 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-14 06:10:03.611599 | orchestrator | Saturday 14 February 2026 06:09:35 +0000 (0:00:00.780) 0:32:47.560 ***** 2026-02-14 06:10:03.611608 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:10:03.611618 | orchestrator | 2026-02-14 06:10:03.611627 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-14 06:10:03.611637 | orchestrator | Saturday 14 February 2026 06:09:36 +0000 (0:00:00.941) 0:32:48.501 ***** 
2026-02-14 06:10:03.611646 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:10:03.611655 | orchestrator | 2026-02-14 06:10:03.611680 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-14 06:10:03.611690 | orchestrator | Saturday 14 February 2026 06:09:37 +0000 (0:00:00.864) 0:32:49.366 ***** 2026-02-14 06:10:03.611700 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:10:03.611709 | orchestrator | 2026-02-14 06:10:03.611719 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-14 06:10:03.611729 | orchestrator | Saturday 14 February 2026 06:09:38 +0000 (0:00:01.030) 0:32:50.396 ***** 2026-02-14 06:10:03.611738 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:10:03.611748 | orchestrator | 2026-02-14 06:10:03.611757 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-14 06:10:03.611767 | orchestrator | Saturday 14 February 2026 06:09:39 +0000 (0:00:01.109) 0:32:51.506 ***** 2026-02-14 06:10:03.611794 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:10:03.611805 | orchestrator | 2026-02-14 06:10:03.611815 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-14 06:10:03.611826 | orchestrator | Saturday 14 February 2026 06:09:40 +0000 (0:00:00.827) 0:32:52.333 ***** 2026-02-14 06:10:03.611835 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:10:03.611845 | orchestrator | 2026-02-14 06:10:03.611854 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-14 06:10:03.611864 | orchestrator | Saturday 14 February 2026 06:09:40 +0000 (0:00:00.858) 0:32:53.192 ***** 2026-02-14 06:10:03.611892 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:10:03.611902 | orchestrator | 2026-02-14 06:10:03.611912 | orchestrator | 
TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-14 06:10:03.611921 | orchestrator | Saturday 14 February 2026 06:09:41 +0000 (0:00:00.851) 0:32:54.044 ***** 2026-02-14 06:10:03.611931 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:10:03.611940 | orchestrator | 2026-02-14 06:10:03.611950 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-14 06:10:03.611959 | orchestrator | Saturday 14 February 2026 06:09:42 +0000 (0:00:00.835) 0:32:54.880 ***** 2026-02-14 06:10:03.611969 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:10:03.611978 | orchestrator | 2026-02-14 06:10:03.611988 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-14 06:10:03.611997 | orchestrator | Saturday 14 February 2026 06:09:43 +0000 (0:00:00.792) 0:32:55.672 ***** 2026-02-14 06:10:03.612007 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-14 06:10:03.612016 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-14 06:10:03.612025 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-14 06:10:03.612035 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:10:03.612045 | orchestrator | 2026-02-14 06:10:03.612054 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-14 06:10:03.612070 | orchestrator | Saturday 14 February 2026 06:09:44 +0000 (0:00:01.094) 0:32:56.767 ***** 2026-02-14 06:10:03.612080 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-14 06:10:03.612090 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-14 06:10:03.612099 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-14 06:10:03.612109 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:10:03.612118 | orchestrator | 2026-02-14 06:10:03.612127 | 
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-14 06:10:03.612137 | orchestrator | Saturday 14 February 2026 06:09:45 +0000 (0:00:01.131) 0:32:57.899 ***** 2026-02-14 06:10:03.612147 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-14 06:10:03.612156 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-14 06:10:03.612165 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-14 06:10:03.612175 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:10:03.612184 | orchestrator | 2026-02-14 06:10:03.612194 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-14 06:10:03.612203 | orchestrator | Saturday 14 February 2026 06:09:46 +0000 (0:00:01.075) 0:32:58.974 ***** 2026-02-14 06:10:03.612213 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:10:03.612222 | orchestrator | 2026-02-14 06:10:03.612231 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-14 06:10:03.612241 | orchestrator | Saturday 14 February 2026 06:09:47 +0000 (0:00:00.788) 0:32:59.763 ***** 2026-02-14 06:10:03.612251 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-02-14 06:10:03.612260 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:10:03.612270 | orchestrator | 2026-02-14 06:10:03.612279 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-14 06:10:03.612289 | orchestrator | Saturday 14 February 2026 06:09:48 +0000 (0:00:00.921) 0:33:00.684 ***** 2026-02-14 06:10:03.612298 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:10:03.612308 | orchestrator | 2026-02-14 06:10:03.612317 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-14 06:10:03.612327 | orchestrator | Saturday 14 February 2026 06:09:49 +0000 (0:00:01.506) 
0:33:02.191 ***** 2026-02-14 06:10:03.612337 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 06:10:03.612347 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:10:03.612356 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-14 06:10:03.612366 | orchestrator | 2026-02-14 06:10:03.612375 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-14 06:10:03.612385 | orchestrator | Saturday 14 February 2026 06:09:51 +0000 (0:00:01.988) 0:33:04.180 ***** 2026-02-14 06:10:03.612395 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-2 2026-02-14 06:10:03.612404 | orchestrator | 2026-02-14 06:10:03.612414 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-02-14 06:10:03.612423 | orchestrator | Saturday 14 February 2026 06:09:53 +0000 (0:00:01.692) 0:33:05.872 ***** 2026-02-14 06:10:03.612437 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:10:03.612447 | orchestrator | 2026-02-14 06:10:03.612456 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-02-14 06:10:03.612466 | orchestrator | Saturday 14 February 2026 06:09:55 +0000 (0:00:01.512) 0:33:07.385 ***** 2026-02-14 06:10:03.612475 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:10:03.612485 | orchestrator | 2026-02-14 06:10:03.612494 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-02-14 06:10:03.612504 | orchestrator | Saturday 14 February 2026 06:09:56 +0000 (0:00:01.220) 0:33:08.606 ***** 2026-02-14 06:10:03.612513 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-14 06:10:03.612529 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-14 
06:10:03.612544 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-14 06:10:51.592364 | orchestrator | ok: [testbed-node-2 -> {{ groups[mon_group_name][0] }}] 2026-02-14 06:10:51.592444 | orchestrator | 2026-02-14 06:10:51.592450 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-02-14 06:10:51.592455 | orchestrator | Saturday 14 February 2026 06:10:03 +0000 (0:00:07.312) 0:33:15.918 ***** 2026-02-14 06:10:51.592459 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:10:51.592464 | orchestrator | 2026-02-14 06:10:51.592468 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-02-14 06:10:51.592473 | orchestrator | Saturday 14 February 2026 06:10:04 +0000 (0:00:01.157) 0:33:17.075 ***** 2026-02-14 06:10:51.592477 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-14 06:10:51.592481 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-02-14 06:10:51.592485 | orchestrator | 2026-02-14 06:10:51.592489 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-02-14 06:10:51.592493 | orchestrator | Saturday 14 February 2026 06:10:07 +0000 (0:00:03.187) 0:33:20.262 ***** 2026-02-14 06:10:51.592497 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-14 06:10:51.592501 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-02-14 06:10:51.592505 | orchestrator | 2026-02-14 06:10:51.592508 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-02-14 06:10:51.592512 | orchestrator | Saturday 14 February 2026 06:10:10 +0000 (0:00:02.082) 0:33:22.345 ***** 2026-02-14 06:10:51.592516 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:10:51.592520 | orchestrator | 2026-02-14 06:10:51.592523 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-02-14 
06:10:51.592527 | orchestrator | Saturday 14 February 2026 06:10:11 +0000 (0:00:01.479) 0:33:23.824 ***** 2026-02-14 06:10:51.592531 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:10:51.592535 | orchestrator | 2026-02-14 06:10:51.592538 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-14 06:10:51.592542 | orchestrator | Saturday 14 February 2026 06:10:12 +0000 (0:00:00.799) 0:33:24.624 ***** 2026-02-14 06:10:51.592546 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:10:51.592550 | orchestrator | 2026-02-14 06:10:51.592553 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-14 06:10:51.592557 | orchestrator | Saturday 14 February 2026 06:10:13 +0000 (0:00:00.959) 0:33:25.583 ***** 2026-02-14 06:10:51.592561 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-2 2026-02-14 06:10:51.592565 | orchestrator | 2026-02-14 06:10:51.592569 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-02-14 06:10:51.592572 | orchestrator | Saturday 14 February 2026 06:10:14 +0000 (0:00:01.276) 0:33:26.860 ***** 2026-02-14 06:10:51.592576 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:10:51.592580 | orchestrator | 2026-02-14 06:10:51.592583 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-02-14 06:10:51.592587 | orchestrator | Saturday 14 February 2026 06:10:15 +0000 (0:00:01.227) 0:33:28.087 ***** 2026-02-14 06:10:51.592591 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:10:51.592594 | orchestrator | 2026-02-14 06:10:51.592598 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-02-14 06:10:51.592602 | orchestrator | Saturday 14 February 2026 06:10:16 +0000 (0:00:01.129) 0:33:29.216 ***** 2026-02-14 06:10:51.592605 | orchestrator | included: 
/ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-2 2026-02-14 06:10:51.592609 | orchestrator | 2026-02-14 06:10:51.592613 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-02-14 06:10:51.592616 | orchestrator | Saturday 14 February 2026 06:10:18 +0000 (0:00:01.169) 0:33:30.386 ***** 2026-02-14 06:10:51.592620 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:10:51.592624 | orchestrator | 2026-02-14 06:10:51.592628 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-02-14 06:10:51.592646 | orchestrator | Saturday 14 February 2026 06:10:20 +0000 (0:00:02.074) 0:33:32.461 ***** 2026-02-14 06:10:51.592650 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:10:51.592654 | orchestrator | 2026-02-14 06:10:51.592658 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-02-14 06:10:51.592661 | orchestrator | Saturday 14 February 2026 06:10:22 +0000 (0:00:01.937) 0:33:34.398 ***** 2026-02-14 06:10:51.592665 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:10:51.592669 | orchestrator | 2026-02-14 06:10:51.592673 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-02-14 06:10:51.592676 | orchestrator | Saturday 14 February 2026 06:10:24 +0000 (0:00:02.465) 0:33:36.864 ***** 2026-02-14 06:10:51.592680 | orchestrator | changed: [testbed-node-2] 2026-02-14 06:10:51.592684 | orchestrator | 2026-02-14 06:10:51.592687 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-02-14 06:10:51.592691 | orchestrator | Saturday 14 February 2026 06:10:28 +0000 (0:00:03.468) 0:33:40.332 ***** 2026-02-14 06:10:51.592695 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-02-14 06:10:51.592699 | orchestrator | 2026-02-14 06:10:51.592702 | orchestrator | TASK [ceph-mgr : Wait for all mgr to 
be up] ************************************ 2026-02-14 06:10:51.592715 | orchestrator | Saturday 14 February 2026 06:10:29 +0000 (0:00:01.512) 0:33:41.845 ***** 2026-02-14 06:10:51.592719 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-14 06:10:51.592723 | orchestrator | 2026-02-14 06:10:51.592727 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-02-14 06:10:51.592730 | orchestrator | Saturday 14 February 2026 06:10:32 +0000 (0:00:02.487) 0:33:44.332 ***** 2026-02-14 06:10:51.592734 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-14 06:10:51.592738 | orchestrator | 2026-02-14 06:10:51.592742 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-02-14 06:10:51.592745 | orchestrator | Saturday 14 February 2026 06:10:34 +0000 (0:00:02.714) 0:33:47.047 ***** 2026-02-14 06:10:51.592749 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:10:51.592753 | orchestrator | 2026-02-14 06:10:51.592756 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-02-14 06:10:51.592770 | orchestrator | Saturday 14 February 2026 06:10:36 +0000 (0:00:02.121) 0:33:49.168 ***** 2026-02-14 06:10:51.592774 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:10:51.592778 | orchestrator | 2026-02-14 06:10:51.592781 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-02-14 06:10:51.592785 | orchestrator | Saturday 14 February 2026 06:10:38 +0000 (0:00:01.252) 0:33:50.420 ***** 2026-02-14 06:10:51.592789 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)  2026-02-14 06:10:51.592792 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)  2026-02-14 06:10:51.592796 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:10:51.592800 | orchestrator | 2026-02-14 06:10:51.592804 | orchestrator | TASK 
[ceph-mgr : Add modules to ceph-mgr] ************************************** 2026-02-14 06:10:51.592807 | orchestrator | Saturday 14 February 2026 06:10:39 +0000 (0:00:01.386) 0:33:51.807 ***** 2026-02-14 06:10:51.592811 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-02-14 06:10:51.592815 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)  2026-02-14 06:10:51.592818 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)  2026-02-14 06:10:51.592822 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-02-14 06:10:51.592826 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:10:51.592830 | orchestrator | 2026-02-14 06:10:51.592833 | orchestrator | PLAY [Set osd flags] *********************************************************** 2026-02-14 06:10:51.592837 | orchestrator | 2026-02-14 06:10:51.592841 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-14 06:10:51.592844 | orchestrator | Saturday 14 February 2026 06:10:41 +0000 (0:00:02.071) 0:33:53.879 ***** 2026-02-14 06:10:51.592853 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:10:51.592857 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:10:51.592861 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:10:51.592895 | orchestrator | 2026-02-14 06:10:51.592899 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-14 06:10:51.592903 | orchestrator | Saturday 14 February 2026 06:10:43 +0000 (0:00:01.638) 0:33:55.518 ***** 2026-02-14 06:10:51.592907 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:10:51.592910 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:10:51.592914 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:10:51.592918 | orchestrator | 2026-02-14 06:10:51.592921 | orchestrator | TASK [Get pool list] *********************************************************** 2026-02-14 06:10:51.592925 | orchestrator | Saturday 14 February 2026 
06:10:44 +0000 (0:00:01.801) 0:33:57.319 ***** 2026-02-14 06:10:51.592929 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-14 06:10:51.592934 | orchestrator | 2026-02-14 06:10:51.592938 | orchestrator | TASK [Get balancer module status] ********************************************** 2026-02-14 06:10:51.592943 | orchestrator | Saturday 14 February 2026 06:10:48 +0000 (0:00:03.114) 0:34:00.433 ***** 2026-02-14 06:10:51.592947 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-14 06:10:51.592951 | orchestrator | 2026-02-14 06:10:51.592956 | orchestrator | TASK [Set_fact pools_pgautoscaler_mode] **************************************** 2026-02-14 06:10:51.592960 | orchestrator | Saturday 14 February 2026 06:10:51 +0000 (0:00:02.911) 0:34:03.345 ***** 2026-02-14 06:10:51.592971 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 1, 'pool_name': '.mgr', 'create_time': '2026-02-14T03:34:25.029162+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '19', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 
'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_acting': 6.059999942779541, 'score_stable': 6.059999942779541, 'optimal_score': 0.33000001311302185, 'raw_score_acting': 2, 'raw_score_stable': 2, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-14 06:10:51.592983 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 2, 'pool_name': 'cephfs_data', 'create_time': '2026-02-14T03:35:36.339389+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '32', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '30', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 
'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'cephfs': {'data': 'cephfs'}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-14 06:10:52.424941 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 3, 'pool_name': 'cephfs_metadata', 'create_time': '2026-02-14T03:35:39.948853+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 16, 'pg_placement_num': 16, 'pg_placement_num_target': 16, 'pg_num_target': 16, 'pg_num_pending': 16, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '81', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '30', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 
'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_autoscale_bias': 4, 'pg_num_min': 16, 'recovery_priority': 5}, 'application_metadata': {'cephfs': {'metadata': 'cephfs'}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-14 06:10:52.425036 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 4, 'pool_name': 'default.rgw.buckets.data', 'create_time': '2026-02-14T03:36:39.592782+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '76', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '70', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 
'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-14 06:10:52.425069 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 5, 'pool_name': 'default.rgw.buckets.index', 'create_time': '2026-02-14T03:36:45.469239+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '76', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 
'last_force_op_resend_preluminous': '70', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-14 06:10:52.425078 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 6, 'pool_name': 'default.rgw.control', 'create_time': '2026-02-14T03:36:51.516052+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '76', 'last_force_op_resend': '0', 
'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '72', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-14 06:10:52.425101 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 7, 'pool_name': 'default.rgw.log', 'create_time': '2026-02-14T03:36:57.739251+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 
'target_version': "0'0"}, 'last_change': '183', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '72', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-14 06:10:52.954230 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 8, 'pool_name': 'default.rgw.meta', 'create_time': '2026-02-14T03:37:03.897346+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 
'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '76', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '74', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-14 06:10:52.954371 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 9, 'pool_name': '.rgw.root', 'create_time': '2026-02-14T03:37:15.955745+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 
'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '76', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '74', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.1299999952316284, 'score_stable': 1.1299999952316284, 'optimal_score': 1, 'raw_score_acting': 1.1299999952316284, 'raw_score_stable': 1.1299999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-14 06:10:52.954411 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 10, 'pool_name': 'backups', 'create_time': '2026-02-14T03:38:00.012573+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 
'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '101', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 101, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 2.059999942779541, 'score_stable': 2.059999942779541, 'optimal_score': 1, 'raw_score_acting': 2.059999942779541, 'raw_score_stable': 2.059999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-14 06:10:52.954440 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 11, 'pool_name': 'volumes', 'create_time': '2026-02-14T03:38:08.679908+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 
'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '109', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 109, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-14 06:10:52.954462 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 12, 'pool_name': 'images', 'create_time': '2026-02-14T03:38:17.483559+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 
'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '193', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 6, 'snap_epoch': 193, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-14 06:12:30.149708 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 13, 'pool_name': 'metrics', 'create_time': '2026-02-14T03:38:26.435967+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 
'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '127', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 127, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-14 06:12:30.149832 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 14, 'pool_name': 'vms', 'create_time': '2026-02-14T03:38:35.776267+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 
'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '136', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 136, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-14 06:12:30.149921 | orchestrator | 2026-02-14 06:12:30.149956 | orchestrator | TASK [Disable balancer] ******************************************************** 2026-02-14 
06:12:30.149970 | orchestrator | Saturday 14 February 2026 06:10:53 +0000 (0:00:02.950) 0:34:06.295 ***** 2026-02-14 06:12:30.149981 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-14 06:12:30.149992 | orchestrator | 2026-02-14 06:12:30.150003 | orchestrator | TASK [Disable pg autoscale on pools] ******************************************* 2026-02-14 06:12:30.150014 | orchestrator | Saturday 14 February 2026 06:10:57 +0000 (0:00:03.042) 0:34:09.338 ***** 2026-02-14 06:12:30.150087 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'}) 2026-02-14 06:12:30.150099 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'}) 2026-02-14 06:12:30.150111 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'}) 2026-02-14 06:12:30.150122 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'}) 2026-02-14 06:12:30.150134 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'}) 2026-02-14 06:12:30.150145 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'}) 2026-02-14 06:12:30.150156 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'}) 2026-02-14 06:12:30.150166 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'}) 2026-02-14 06:12:30.150178 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'}) 2026-02-14 06:12:30.150189 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 
'off'})  2026-02-14 06:12:30.150199 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})  2026-02-14 06:12:30.150220 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})  2026-02-14 06:12:30.150233 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})  2026-02-14 06:12:30.150254 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})  2026-02-14 06:12:30.150267 | orchestrator | 2026-02-14 06:12:30.150280 | orchestrator | TASK [Set osd flags] *********************************************************** 2026-02-14 06:12:30.150292 | orchestrator | Saturday 14 February 2026 06:12:12 +0000 (0:01:15.657) 0:35:24.995 ***** 2026-02-14 06:12:30.150304 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout) 2026-02-14 06:12:30.150316 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub) 2026-02-14 06:12:30.150328 | orchestrator | 2026-02-14 06:12:30.150341 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-02-14 06:12:30.150354 | orchestrator | 2026-02-14 06:12:30.150365 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-14 06:12:30.150377 | orchestrator | Saturday 14 February 2026 06:12:18 +0000 (0:00:05.969) 0:35:30.965 ***** 2026-02-14 06:12:30.150389 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3 2026-02-14 06:12:30.150401 | orchestrator | 2026-02-14 06:12:30.150414 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-14 06:12:30.150426 | orchestrator | Saturday 14 February 2026 06:12:20 +0000 (0:00:01.389) 0:35:32.354 ***** 2026-02-14 06:12:30.150438 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:12:30.150451 | orchestrator | 2026-02-14 06:12:30.150463 | 
orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-14 06:12:30.150476 | orchestrator | Saturday 14 February 2026 06:12:21 +0000 (0:00:01.492) 0:35:33.847 ***** 2026-02-14 06:12:30.150495 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:12:30.150513 | orchestrator | 2026-02-14 06:12:30.150538 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-14 06:12:30.150565 | orchestrator | Saturday 14 February 2026 06:12:22 +0000 (0:00:01.195) 0:35:35.043 ***** 2026-02-14 06:12:30.150583 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:12:30.150600 | orchestrator | 2026-02-14 06:12:30.150626 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-14 06:12:30.150646 | orchestrator | Saturday 14 February 2026 06:12:24 +0000 (0:00:01.473) 0:35:36.517 ***** 2026-02-14 06:12:30.150664 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:12:30.150681 | orchestrator | 2026-02-14 06:12:30.150698 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-14 06:12:30.150714 | orchestrator | Saturday 14 February 2026 06:12:25 +0000 (0:00:01.213) 0:35:37.730 ***** 2026-02-14 06:12:30.150733 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:12:30.150749 | orchestrator | 2026-02-14 06:12:30.150767 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-14 06:12:30.150785 | orchestrator | Saturday 14 February 2026 06:12:26 +0000 (0:00:01.186) 0:35:38.916 ***** 2026-02-14 06:12:30.150801 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:12:30.150817 | orchestrator | 2026-02-14 06:12:30.150834 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-14 06:12:30.150850 | orchestrator | Saturday 14 February 2026 06:12:27 +0000 (0:00:01.186) 0:35:40.103 ***** 2026-02-14 06:12:30.150893 
| orchestrator | skipping: [testbed-node-3] 2026-02-14 06:12:30.150911 | orchestrator | 2026-02-14 06:12:30.150927 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-14 06:12:30.150944 | orchestrator | Saturday 14 February 2026 06:12:28 +0000 (0:00:01.150) 0:35:41.253 ***** 2026-02-14 06:12:30.150961 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:12:30.150978 | orchestrator | 2026-02-14 06:12:30.151009 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-14 06:12:57.301114 | orchestrator | Saturday 14 February 2026 06:12:30 +0000 (0:00:01.211) 0:35:42.464 ***** 2026-02-14 06:12:57.301232 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 06:12:57.301248 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:12:57.301283 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 06:12:57.301295 | orchestrator | 2026-02-14 06:12:57.301307 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-14 06:12:57.301318 | orchestrator | Saturday 14 February 2026 06:12:32 +0000 (0:00:02.121) 0:35:44.586 ***** 2026-02-14 06:12:57.301329 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:12:57.301341 | orchestrator | 2026-02-14 06:12:57.301352 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-14 06:12:57.301363 | orchestrator | Saturday 14 February 2026 06:12:33 +0000 (0:00:01.300) 0:35:45.886 ***** 2026-02-14 06:12:57.301374 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 06:12:57.301384 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:12:57.301395 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 06:12:57.301405 | orchestrator | 2026-02-14 06:12:57.301416 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-14 06:12:57.301427 | orchestrator | Saturday 14 February 2026 06:12:37 +0000 (0:00:04.316) 0:35:50.203 ***** 2026-02-14 06:12:57.301438 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-14 06:12:57.301449 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-14 06:12:57.301460 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-14 06:12:57.301471 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:12:57.301481 | orchestrator | 2026-02-14 06:12:57.301492 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-14 06:12:57.301502 | orchestrator | Saturday 14 February 2026 06:12:39 +0000 (0:00:01.925) 0:35:52.129 ***** 2026-02-14 06:12:57.301530 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-14 06:12:57.301545 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-14 06:12:57.301556 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-14 06:12:57.301567 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:12:57.301577 | orchestrator | 2026-02-14 06:12:57.301588 | orchestrator | TASK [ceph-facts : 
Set_fact running_mon - non_container] *********************** 2026-02-14 06:12:57.301599 | orchestrator | Saturday 14 February 2026 06:12:41 +0000 (0:00:02.179) 0:35:54.309 ***** 2026-02-14 06:12:57.301612 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 06:12:57.301625 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 06:12:57.301637 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 06:12:57.301660 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:12:57.301673 | orchestrator | 2026-02-14 06:12:57.301686 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-14 06:12:57.301699 | orchestrator | Saturday 14 February 2026 06:12:43 +0000 (0:00:01.162) 0:35:55.471 ***** 2026-02-14 06:12:57.301733 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 
'stdout': 'fcade5e8eca4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-14 06:12:34.086456', 'end': '2026-02-14 06:12:34.144532', 'delta': '0:00:00.058076', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fcade5e8eca4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-14 06:12:57.301749 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'b8937503c016', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-14 06:12:35.055898', 'end': '2026-02-14 06:12:35.101553', 'delta': '0:00:00.045655', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b8937503c016'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-14 06:12:57.301768 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'bc1e9cbf1ddd', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-14 06:12:35.589092', 'end': '2026-02-14 06:12:36.634793', 'delta': '0:00:01.045701', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 
'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['bc1e9cbf1ddd'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-14 06:12:57.301781 | orchestrator | 2026-02-14 06:12:57.301793 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-14 06:12:57.301806 | orchestrator | Saturday 14 February 2026 06:12:44 +0000 (0:00:01.245) 0:35:56.717 ***** 2026-02-14 06:12:57.301818 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:12:57.301830 | orchestrator | 2026-02-14 06:12:57.301842 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-14 06:12:57.301854 | orchestrator | Saturday 14 February 2026 06:12:45 +0000 (0:00:01.291) 0:35:58.009 ***** 2026-02-14 06:12:57.301897 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:12:57.301911 | orchestrator | 2026-02-14 06:12:57.301923 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-14 06:12:57.301935 | orchestrator | Saturday 14 February 2026 06:12:47 +0000 (0:00:01.323) 0:35:59.333 ***** 2026-02-14 06:12:57.301947 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:12:57.301959 | orchestrator | 2026-02-14 06:12:57.301972 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-14 06:12:57.301984 | orchestrator | Saturday 14 February 2026 06:12:48 +0000 (0:00:01.203) 0:36:00.536 ***** 2026-02-14 06:12:57.302003 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-14 06:12:57.302014 | orchestrator | 2026-02-14 06:12:57.302090 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-14 06:12:57.302101 | orchestrator | Saturday 14 February 2026 06:12:50 +0000 (0:00:01.944) 
0:36:02.480 ***** 2026-02-14 06:12:57.302112 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:12:57.302123 | orchestrator | 2026-02-14 06:12:57.302133 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-14 06:12:57.302144 | orchestrator | Saturday 14 February 2026 06:12:51 +0000 (0:00:01.210) 0:36:03.691 ***** 2026-02-14 06:12:57.302154 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:12:57.302165 | orchestrator | 2026-02-14 06:12:57.302176 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-14 06:12:57.302187 | orchestrator | Saturday 14 February 2026 06:12:52 +0000 (0:00:01.172) 0:36:04.864 ***** 2026-02-14 06:12:57.302197 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:12:57.302208 | orchestrator | 2026-02-14 06:12:57.302219 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-14 06:12:57.302230 | orchestrator | Saturday 14 February 2026 06:12:53 +0000 (0:00:01.219) 0:36:06.084 ***** 2026-02-14 06:12:57.302240 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:12:57.302251 | orchestrator | 2026-02-14 06:12:57.302262 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-14 06:12:57.302272 | orchestrator | Saturday 14 February 2026 06:12:54 +0000 (0:00:01.129) 0:36:07.213 ***** 2026-02-14 06:12:57.302283 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:12:57.302293 | orchestrator | 2026-02-14 06:12:57.302304 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-14 06:12:57.302315 | orchestrator | Saturday 14 February 2026 06:12:56 +0000 (0:00:01.173) 0:36:08.387 ***** 2026-02-14 06:12:57.302334 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:13:02.531346 | orchestrator | 2026-02-14 06:13:02.531460 | orchestrator | TASK [ceph-facts : Resolve dedicated_device 
link(s)] *************************** 2026-02-14 06:13:02.531478 | orchestrator | Saturday 14 February 2026 06:12:57 +0000 (0:00:01.229) 0:36:09.616 ***** 2026-02-14 06:13:02.531491 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:13:02.531511 | orchestrator | 2026-02-14 06:13:02.531530 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-14 06:13:02.531549 | orchestrator | Saturday 14 February 2026 06:12:58 +0000 (0:00:01.415) 0:36:11.031 ***** 2026-02-14 06:13:02.531568 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:13:02.531588 | orchestrator | 2026-02-14 06:13:02.531607 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-14 06:13:02.531625 | orchestrator | Saturday 14 February 2026 06:12:59 +0000 (0:00:01.200) 0:36:12.231 ***** 2026-02-14 06:13:02.531642 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:13:02.531661 | orchestrator | 2026-02-14 06:13:02.531681 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-14 06:13:02.531700 | orchestrator | Saturday 14 February 2026 06:13:01 +0000 (0:00:01.174) 0:36:13.406 ***** 2026-02-14 06:13:02.531713 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:13:02.531724 | orchestrator | 2026-02-14 06:13:02.531735 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-14 06:13:02.531746 | orchestrator | Saturday 14 February 2026 06:13:02 +0000 (0:00:01.200) 0:36:14.606 ***** 2026-02-14 06:13:02.531759 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 
Bytes', 'host': '', 'holders': []}})  2026-02-14 06:13:02.531792 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--86d1df08--738c--52e0--accb--8c0a21213af6-osd--block--86d1df08--738c--52e0--accb--8c0a21213af6', 'dm-uuid-LVM-y8TFd42k7h3tskYaBmVU96eirAODLPPWLm3s7r1uHf3qd9eZ715af0u59pi4vRGe'], 'uuids': ['6378402a-7c1c-407a-be8c-200236570708'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2ec12fdb', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Lm3s7r-1uHf-3qd9-eZ71-5af0-u59p-i4vRGe']}})  2026-02-14 06:13:02.531833 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8657c064-423f-4604-b6db-e42322d0b025', 'scsi-SQEMU_QEMU_HARDDISK_8657c064-423f-4604-b6db-e42322d0b025'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8657c064', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-14 06:13:02.531846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-D7g0SF-SeWa-7MSU-rwcF-cnTN-mPuF-kfA0YK', 'scsi-0QEMU_QEMU_HARDDISK_763dae4f-8aba-40cd-b4e7-eeabad093491', 'scsi-SQEMU_QEMU_HARDDISK_763dae4f-8aba-40cd-b4e7-eeabad093491'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '763dae4f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--d74a1ea4--c27e--5375--be56--9d9a8e069fa6-osd--block--d74a1ea4--c27e--5375--be56--9d9a8e069fa6']}})  2026-02-14 06:13:02.531858 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:13:02.531924 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:13:02.531938 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-10-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-14 06:13:02.531950 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:13:02.531977 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Oj0FlQ-0hDf-J2Ma-enWm-21pn-eMRY-3n5AFS', 'dm-uuid-CRYPT-LUKS2-254c5794787a438987c7d5772aa30a89-Oj0FlQ-0hDf-J2Ma-enWm-21pn-eMRY-3n5AFS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-14 06:13:02.531989 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:13:02.532001 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d74a1ea4--c27e--5375--be56--9d9a8e069fa6-osd--block--d74a1ea4--c27e--5375--be56--9d9a8e069fa6', 'dm-uuid-LVM-bsT5DZ8cw32sKmXOfJetQqGU0HxblzT0Oj0FlQ0hDfJ2MaenWm21pneMRY3n5AFS'], 'uuids': ['254c5794-787a-4389-87c7-d5772aa30a89'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '763dae4f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Oj0FlQ-0hDf-J2Ma-enWm-21pn-eMRY-3n5AFS']}})  2026-02-14 06:13:02.532012 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-oc2pXT-2pSW-cOnk-GYPm-BmdS-2yWK-CLqXT7', 'scsi-0QEMU_QEMU_HARDDISK_2ec12fdb-ec43-4dc2-9206-4086e60213b8', 'scsi-SQEMU_QEMU_HARDDISK_2ec12fdb-ec43-4dc2-9206-4086e60213b8'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2ec12fdb', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--86d1df08--738c--52e0--accb--8c0a21213af6-osd--block--86d1df08--738c--52e0--accb--8c0a21213af6']}})  2026-02-14 06:13:02.532033 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:13:04.010413 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '01a64ec0', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part16', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part14', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part15', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part1', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-14 06:13:04.010541 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:13:04.010560 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:13:04.010574 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Lm3s7r-1uHf-3qd9-eZ71-5af0-u59p-i4vRGe', 'dm-uuid-CRYPT-LUKS2-6378402a7c1c407abe8c200236570708-Lm3s7r-1uHf-3qd9-eZ71-5af0-u59p-i4vRGe'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-14 06:13:04.010588 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:13:04.010601 | orchestrator | 2026-02-14 06:13:04.010614 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-14 06:13:04.010627 | orchestrator | Saturday 14 February 2026 06:13:03 +0000 (0:00:01.466) 0:36:16.072 ***** 2026-02-14 06:13:04.010659 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:13:04.010674 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--86d1df08--738c--52e0--accb--8c0a21213af6-osd--block--86d1df08--738c--52e0--accb--8c0a21213af6', 'dm-uuid-LVM-y8TFd42k7h3tskYaBmVU96eirAODLPPWLm3s7r1uHf3qd9eZ715af0u59pi4vRGe'], 'uuids': ['6378402a-7c1c-407a-be8c-200236570708'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2ec12fdb', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Lm3s7r-1uHf-3qd9-eZ71-5af0-u59p-i4vRGe']}}, 'ansible_loop_var': 'item'})  2026-02-14 06:13:04.010702 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8657c064-423f-4604-b6db-e42322d0b025', 'scsi-SQEMU_QEMU_HARDDISK_8657c064-423f-4604-b6db-e42322d0b025'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8657c064', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:13:04.010716 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-D7g0SF-SeWa-7MSU-rwcF-cnTN-mPuF-kfA0YK', 'scsi-0QEMU_QEMU_HARDDISK_763dae4f-8aba-40cd-b4e7-eeabad093491', 'scsi-SQEMU_QEMU_HARDDISK_763dae4f-8aba-40cd-b4e7-eeabad093491'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '763dae4f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--d74a1ea4--c27e--5375--be56--9d9a8e069fa6-osd--block--d74a1ea4--c27e--5375--be56--9d9a8e069fa6']}}, 'ansible_loop_var': 'item'})  2026-02-14 06:13:04.010729 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:13:04.010747 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:13:05.308487 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-10-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:13:05.308669 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:13:05.308699 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Oj0FlQ-0hDf-J2Ma-enWm-21pn-eMRY-3n5AFS', 'dm-uuid-CRYPT-LUKS2-254c5794787a438987c7d5772aa30a89-Oj0FlQ-0hDf-J2Ma-enWm-21pn-eMRY-3n5AFS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:13:05.308713 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:13:05.308726 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d74a1ea4--c27e--5375--be56--9d9a8e069fa6-osd--block--d74a1ea4--c27e--5375--be56--9d9a8e069fa6', 'dm-uuid-LVM-bsT5DZ8cw32sKmXOfJetQqGU0HxblzT0Oj0FlQ0hDfJ2MaenWm21pneMRY3n5AFS'], 'uuids': ['254c5794-787a-4389-87c7-d5772aa30a89'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '763dae4f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Oj0FlQ-0hDf-J2Ma-enWm-21pn-eMRY-3n5AFS']}}, 'ansible_loop_var': 'item'})  2026-02-14 06:13:05.308774 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-oc2pXT-2pSW-cOnk-GYPm-BmdS-2yWK-CLqXT7', 'scsi-0QEMU_QEMU_HARDDISK_2ec12fdb-ec43-4dc2-9206-4086e60213b8', 'scsi-SQEMU_QEMU_HARDDISK_2ec12fdb-ec43-4dc2-9206-4086e60213b8'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2ec12fdb', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--86d1df08--738c--52e0--accb--8c0a21213af6-osd--block--86d1df08--738c--52e0--accb--8c0a21213af6']}}, 'ansible_loop_var': 'item'})  2026-02-14 06:13:05.308802 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:13:05.308822 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '01a64ec0', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part16', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part14', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part15', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part1', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:13:05.308835 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:13:05.308945 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:13:44.847571 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Lm3s7r-1uHf-3qd9-eZ71-5af0-u59p-i4vRGe', 'dm-uuid-CRYPT-LUKS2-6378402a7c1c407abe8c200236570708-Lm3s7r-1uHf-3qd9-eZ71-5af0-u59p-i4vRGe'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:13:44.847693 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:13:44.847711 | orchestrator | 2026-02-14 06:13:44.847723 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-14 06:13:44.847736 | orchestrator | Saturday 14 February 2026 06:13:05 +0000 (0:00:01.549) 0:36:17.622 ***** 2026-02-14 06:13:44.847747 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:13:44.847759 | orchestrator | 2026-02-14 06:13:44.847770 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-14 06:13:44.847781 | orchestrator | Saturday 14 February 2026 06:13:06 +0000 (0:00:01.557) 0:36:19.179 ***** 2026-02-14 06:13:44.847791 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:13:44.847802 | orchestrator | 2026-02-14 06:13:44.847813 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-14 06:13:44.847824 | orchestrator | Saturday 14 February 2026 06:13:08 +0000 (0:00:01.344) 0:36:20.524 ***** 2026-02-14 06:13:44.847835 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:13:44.847846 | orchestrator | 2026-02-14 06:13:44.847857 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-14 06:13:44.847923 | orchestrator | Saturday 14 February 2026 06:13:09 +0000 (0:00:01.516) 0:36:22.040 ***** 2026-02-14 06:13:44.847935 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:13:44.847946 | orchestrator | 2026-02-14 06:13:44.847956 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-14 06:13:44.847967 | orchestrator | Saturday 14 February 2026 06:13:10 +0000 (0:00:01.155) 0:36:23.195 ***** 2026-02-14 06:13:44.847978 | orchestrator | skipping: [testbed-node-3] 2026-02-14 
06:13:44.847988 | orchestrator | 2026-02-14 06:13:44.847999 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-14 06:13:44.848010 | orchestrator | Saturday 14 February 2026 06:13:12 +0000 (0:00:01.247) 0:36:24.443 ***** 2026-02-14 06:13:44.848021 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:13:44.848031 | orchestrator | 2026-02-14 06:13:44.848042 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-14 06:13:44.848053 | orchestrator | Saturday 14 February 2026 06:13:13 +0000 (0:00:01.152) 0:36:25.595 ***** 2026-02-14 06:13:44.848063 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-14 06:13:44.848074 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-14 06:13:44.848086 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-14 06:13:44.848098 | orchestrator | 2026-02-14 06:13:44.848110 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-14 06:13:44.848122 | orchestrator | Saturday 14 February 2026 06:13:15 +0000 (0:00:02.145) 0:36:27.741 ***** 2026-02-14 06:13:44.848134 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-14 06:13:44.848147 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-14 06:13:44.848188 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-14 06:13:44.848201 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:13:44.848213 | orchestrator | 2026-02-14 06:13:44.848225 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-14 06:13:44.848237 | orchestrator | Saturday 14 February 2026 06:13:16 +0000 (0:00:01.412) 0:36:29.153 ***** 2026-02-14 06:13:44.848249 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3 2026-02-14 06:13:44.848262 | 
orchestrator | 2026-02-14 06:13:44.848275 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-14 06:13:44.848288 | orchestrator | Saturday 14 February 2026 06:13:17 +0000 (0:00:01.123) 0:36:30.277 ***** 2026-02-14 06:13:44.848300 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:13:44.848313 | orchestrator | 2026-02-14 06:13:44.848325 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-14 06:13:44.848336 | orchestrator | Saturday 14 February 2026 06:13:19 +0000 (0:00:01.210) 0:36:31.488 ***** 2026-02-14 06:13:44.848346 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:13:44.848357 | orchestrator | 2026-02-14 06:13:44.848368 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-14 06:13:44.848378 | orchestrator | Saturday 14 February 2026 06:13:20 +0000 (0:00:01.156) 0:36:32.644 ***** 2026-02-14 06:13:44.848389 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:13:44.848400 | orchestrator | 2026-02-14 06:13:44.848410 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-14 06:13:44.848421 | orchestrator | Saturday 14 February 2026 06:13:21 +0000 (0:00:01.197) 0:36:33.842 ***** 2026-02-14 06:13:44.848431 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:13:44.848442 | orchestrator | 2026-02-14 06:13:44.848453 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-14 06:13:44.848463 | orchestrator | Saturday 14 February 2026 06:13:22 +0000 (0:00:01.309) 0:36:35.152 ***** 2026-02-14 06:13:44.848474 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-14 06:13:44.848503 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-14 06:13:44.848515 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-02-14 06:13:44.848526 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:13:44.848537 | orchestrator | 2026-02-14 06:13:44.848548 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-14 06:13:44.848558 | orchestrator | Saturday 14 February 2026 06:13:24 +0000 (0:00:01.528) 0:36:36.680 ***** 2026-02-14 06:13:44.848569 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-14 06:13:44.848580 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-14 06:13:44.848591 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-14 06:13:44.848601 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:13:44.848612 | orchestrator | 2026-02-14 06:13:44.848622 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-14 06:13:44.848640 | orchestrator | Saturday 14 February 2026 06:13:25 +0000 (0:00:01.458) 0:36:38.138 ***** 2026-02-14 06:13:44.848652 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-14 06:13:44.848662 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-14 06:13:44.848673 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-14 06:13:44.848683 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:13:44.848694 | orchestrator | 2026-02-14 06:13:44.848705 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-14 06:13:44.848716 | orchestrator | Saturday 14 February 2026 06:13:27 +0000 (0:00:01.462) 0:36:39.601 ***** 2026-02-14 06:13:44.848726 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:13:44.848737 | orchestrator | 2026-02-14 06:13:44.848748 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-14 06:13:44.848767 | orchestrator | Saturday 14 February 2026 06:13:28 +0000 
(0:00:01.237) 0:36:40.839 ***** 2026-02-14 06:13:44.848778 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-14 06:13:44.848789 | orchestrator | 2026-02-14 06:13:44.848799 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-14 06:13:44.848810 | orchestrator | Saturday 14 February 2026 06:13:29 +0000 (0:00:01.407) 0:36:42.247 ***** 2026-02-14 06:13:44.848821 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 06:13:44.848832 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:13:44.848842 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 06:13:44.848853 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-14 06:13:44.848880 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-14 06:13:44.848892 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-14 06:13:44.848903 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-14 06:13:44.848914 | orchestrator | 2026-02-14 06:13:44.848924 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-14 06:13:44.848935 | orchestrator | Saturday 14 February 2026 06:13:32 +0000 (0:00:02.353) 0:36:44.600 ***** 2026-02-14 06:13:44.848946 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 06:13:44.848956 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:13:44.848967 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 06:13:44.848977 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-14 06:13:44.848988 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-14 06:13:44.848999 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-14 06:13:44.849010 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-14 06:13:44.849020 | orchestrator | 2026-02-14 06:13:44.849031 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-02-14 06:13:44.849041 | orchestrator | Saturday 14 February 2026 06:13:35 +0000 (0:00:03.178) 0:36:47.778 ***** 2026-02-14 06:13:44.849052 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:13:44.849063 | orchestrator | 2026-02-14 06:13:44.849073 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-02-14 06:13:44.849084 | orchestrator | Saturday 14 February 2026 06:13:36 +0000 (0:00:01.488) 0:36:49.267 ***** 2026-02-14 06:13:44.849095 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:13:44.849106 | orchestrator | 2026-02-14 06:13:44.849116 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-02-14 06:13:44.849126 | orchestrator | Saturday 14 February 2026 06:13:38 +0000 (0:00:01.201) 0:36:50.468 ***** 2026-02-14 06:13:44.849137 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:13:44.849148 | orchestrator | 2026-02-14 06:13:44.849158 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-02-14 06:13:44.849169 | orchestrator | Saturday 14 February 2026 06:13:39 +0000 (0:00:01.317) 0:36:51.785 ***** 2026-02-14 06:13:44.849180 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-02-14 06:13:44.849190 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-02-14 06:13:44.849201 | orchestrator | 2026-02-14 06:13:44.849212 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] 
************************ 2026-02-14 06:13:44.849222 | orchestrator | Saturday 14 February 2026 06:13:43 +0000 (0:00:04.219) 0:36:56.005 ***** 2026-02-14 06:13:44.849233 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3 2026-02-14 06:13:44.849244 | orchestrator | 2026-02-14 06:13:44.849254 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-14 06:13:44.849277 | orchestrator | Saturday 14 February 2026 06:13:44 +0000 (0:00:01.156) 0:36:57.161 ***** 2026-02-14 06:14:36.627593 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3 2026-02-14 06:14:36.627713 | orchestrator | 2026-02-14 06:14:36.627731 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-14 06:14:36.627744 | orchestrator | Saturday 14 February 2026 06:13:46 +0000 (0:00:01.203) 0:36:58.365 ***** 2026-02-14 06:14:36.627756 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:14:36.627768 | orchestrator | 2026-02-14 06:14:36.627779 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-14 06:14:36.627790 | orchestrator | Saturday 14 February 2026 06:13:47 +0000 (0:00:01.242) 0:36:59.607 ***** 2026-02-14 06:14:36.627801 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:14:36.627813 | orchestrator | 2026-02-14 06:14:36.627824 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-14 06:14:36.627834 | orchestrator | Saturday 14 February 2026 06:13:48 +0000 (0:00:01.570) 0:37:01.178 ***** 2026-02-14 06:14:36.627905 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:14:36.627919 | orchestrator | 2026-02-14 06:14:36.627929 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-14 06:14:36.627940 | orchestrator | Saturday 14 February 2026 
06:13:50 +0000 (0:00:01.550) 0:37:02.728 ***** 2026-02-14 06:14:36.627952 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:14:36.627963 | orchestrator | 2026-02-14 06:14:36.627974 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-14 06:14:36.627984 | orchestrator | Saturday 14 February 2026 06:13:51 +0000 (0:00:01.564) 0:37:04.293 ***** 2026-02-14 06:14:36.627995 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:14:36.628006 | orchestrator | 2026-02-14 06:14:36.628017 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-14 06:14:36.628028 | orchestrator | Saturday 14 February 2026 06:13:53 +0000 (0:00:01.202) 0:37:05.496 ***** 2026-02-14 06:14:36.628038 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:14:36.628049 | orchestrator | 2026-02-14 06:14:36.628060 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-14 06:14:36.628071 | orchestrator | Saturday 14 February 2026 06:13:54 +0000 (0:00:01.159) 0:37:06.656 ***** 2026-02-14 06:14:36.628081 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:14:36.628092 | orchestrator | 2026-02-14 06:14:36.628103 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-14 06:14:36.628114 | orchestrator | Saturday 14 February 2026 06:13:55 +0000 (0:00:01.151) 0:37:07.807 ***** 2026-02-14 06:14:36.628127 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:14:36.628140 | orchestrator | 2026-02-14 06:14:36.628152 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-14 06:14:36.628164 | orchestrator | Saturday 14 February 2026 06:13:57 +0000 (0:00:01.564) 0:37:09.371 ***** 2026-02-14 06:14:36.628176 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:14:36.628189 | orchestrator | 2026-02-14 06:14:36.628201 | orchestrator | TASK [ceph-handler : 
Include check_socket_non_container.yml] ******************* 2026-02-14 06:14:36.628214 | orchestrator | Saturday 14 February 2026 06:13:58 +0000 (0:00:01.663) 0:37:11.035 ***** 2026-02-14 06:14:36.628226 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:14:36.628239 | orchestrator | 2026-02-14 06:14:36.628251 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-14 06:14:36.628264 | orchestrator | Saturday 14 February 2026 06:13:59 +0000 (0:00:01.222) 0:37:12.258 ***** 2026-02-14 06:14:36.628277 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:14:36.628289 | orchestrator | 2026-02-14 06:14:36.628301 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-14 06:14:36.628313 | orchestrator | Saturday 14 February 2026 06:14:01 +0000 (0:00:01.146) 0:37:13.404 ***** 2026-02-14 06:14:36.628327 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:14:36.628340 | orchestrator | 2026-02-14 06:14:36.628376 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-14 06:14:36.628389 | orchestrator | Saturday 14 February 2026 06:14:02 +0000 (0:00:01.162) 0:37:14.566 ***** 2026-02-14 06:14:36.628402 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:14:36.628413 | orchestrator | 2026-02-14 06:14:36.628425 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-14 06:14:36.628438 | orchestrator | Saturday 14 February 2026 06:14:03 +0000 (0:00:01.159) 0:37:15.725 ***** 2026-02-14 06:14:36.628450 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:14:36.628463 | orchestrator | 2026-02-14 06:14:36.628475 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-14 06:14:36.628486 | orchestrator | Saturday 14 February 2026 06:14:04 +0000 (0:00:01.176) 0:37:16.902 ***** 2026-02-14 06:14:36.628497 | orchestrator | skipping: 
[testbed-node-3] 2026-02-14 06:14:36.628508 | orchestrator | 2026-02-14 06:14:36.628519 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-14 06:14:36.628530 | orchestrator | Saturday 14 February 2026 06:14:05 +0000 (0:00:01.155) 0:37:18.058 ***** 2026-02-14 06:14:36.628540 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:14:36.628552 | orchestrator | 2026-02-14 06:14:36.628563 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-14 06:14:36.628573 | orchestrator | Saturday 14 February 2026 06:14:06 +0000 (0:00:01.116) 0:37:19.174 ***** 2026-02-14 06:14:36.628584 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:14:36.628595 | orchestrator | 2026-02-14 06:14:36.628606 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-14 06:14:36.628617 | orchestrator | Saturday 14 February 2026 06:14:08 +0000 (0:00:01.209) 0:37:20.384 ***** 2026-02-14 06:14:36.628627 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:14:36.628638 | orchestrator | 2026-02-14 06:14:36.628649 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-14 06:14:36.628660 | orchestrator | Saturday 14 February 2026 06:14:09 +0000 (0:00:01.287) 0:37:21.672 ***** 2026-02-14 06:14:36.628671 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:14:36.628682 | orchestrator | 2026-02-14 06:14:36.628692 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-14 06:14:36.628703 | orchestrator | Saturday 14 February 2026 06:14:10 +0000 (0:00:01.188) 0:37:22.860 ***** 2026-02-14 06:14:36.628714 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:14:36.628725 | orchestrator | 2026-02-14 06:14:36.628753 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-14 06:14:36.628765 | 
orchestrator | Saturday 14 February 2026 06:14:11 +0000 (0:00:01.164) 0:37:24.025 ***** 2026-02-14 06:14:36.628776 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:14:36.628787 | orchestrator | 2026-02-14 06:14:36.628797 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-14 06:14:36.628808 | orchestrator | Saturday 14 February 2026 06:14:12 +0000 (0:00:01.105) 0:37:25.130 ***** 2026-02-14 06:14:36.628819 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:14:36.628830 | orchestrator | 2026-02-14 06:14:36.628840 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-14 06:14:36.628851 | orchestrator | Saturday 14 February 2026 06:14:13 +0000 (0:00:01.164) 0:37:26.295 ***** 2026-02-14 06:14:36.628880 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:14:36.628891 | orchestrator | 2026-02-14 06:14:36.628902 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-14 06:14:36.628918 | orchestrator | Saturday 14 February 2026 06:14:15 +0000 (0:00:01.209) 0:37:27.504 ***** 2026-02-14 06:14:36.628929 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:14:36.628940 | orchestrator | 2026-02-14 06:14:36.628951 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-14 06:14:36.628962 | orchestrator | Saturday 14 February 2026 06:14:16 +0000 (0:00:01.116) 0:37:28.620 ***** 2026-02-14 06:14:36.628973 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:14:36.628984 | orchestrator | 2026-02-14 06:14:36.629003 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-14 06:14:36.629014 | orchestrator | Saturday 14 February 2026 06:14:17 +0000 (0:00:01.128) 0:37:29.749 ***** 2026-02-14 06:14:36.629025 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:14:36.629036 | orchestrator | 2026-02-14 
06:14:36.629047 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-14 06:14:36.629058 | orchestrator | Saturday 14 February 2026 06:14:18 +0000 (0:00:01.173) 0:37:30.922 ***** 2026-02-14 06:14:36.629069 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:14:36.629080 | orchestrator | 2026-02-14 06:14:36.629090 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-14 06:14:36.629101 | orchestrator | Saturday 14 February 2026 06:14:19 +0000 (0:00:01.180) 0:37:32.103 ***** 2026-02-14 06:14:36.629112 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:14:36.629122 | orchestrator | 2026-02-14 06:14:36.629133 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-14 06:14:36.629144 | orchestrator | Saturday 14 February 2026 06:14:20 +0000 (0:00:01.126) 0:37:33.229 ***** 2026-02-14 06:14:36.629155 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:14:36.629166 | orchestrator | 2026-02-14 06:14:36.629176 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-14 06:14:36.629187 | orchestrator | Saturday 14 February 2026 06:14:22 +0000 (0:00:01.142) 0:37:34.372 ***** 2026-02-14 06:14:36.629198 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:14:36.629208 | orchestrator | 2026-02-14 06:14:36.629219 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-14 06:14:36.629230 | orchestrator | Saturday 14 February 2026 06:14:23 +0000 (0:00:01.228) 0:37:35.600 ***** 2026-02-14 06:14:36.629240 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:14:36.629251 | orchestrator | 2026-02-14 06:14:36.629262 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-14 06:14:36.629272 | orchestrator | Saturday 14 February 2026 06:14:24 +0000 
(0:00:01.135) 0:37:36.736 ***** 2026-02-14 06:14:36.629283 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:14:36.629294 | orchestrator | 2026-02-14 06:14:36.629305 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-14 06:14:36.629316 | orchestrator | Saturday 14 February 2026 06:14:26 +0000 (0:00:01.966) 0:37:38.702 ***** 2026-02-14 06:14:36.629326 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:14:36.629337 | orchestrator | 2026-02-14 06:14:36.629348 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-14 06:14:36.629359 | orchestrator | Saturday 14 February 2026 06:14:28 +0000 (0:00:02.314) 0:37:41.017 ***** 2026-02-14 06:14:36.629370 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3 2026-02-14 06:14:36.629381 | orchestrator | 2026-02-14 06:14:36.629391 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-14 06:14:36.629402 | orchestrator | Saturday 14 February 2026 06:14:29 +0000 (0:00:01.149) 0:37:42.166 ***** 2026-02-14 06:14:36.629413 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:14:36.629424 | orchestrator | 2026-02-14 06:14:36.629435 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-14 06:14:36.629446 | orchestrator | Saturday 14 February 2026 06:14:31 +0000 (0:00:01.184) 0:37:43.351 ***** 2026-02-14 06:14:36.629456 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:14:36.629467 | orchestrator | 2026-02-14 06:14:36.629478 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-14 06:14:36.629489 | orchestrator | Saturday 14 February 2026 06:14:32 +0000 (0:00:01.131) 0:37:44.482 ***** 2026-02-14 06:14:36.629500 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-14 
06:14:36.629510 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-14 06:14:36.629521 | orchestrator | 2026-02-14 06:14:36.629532 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-14 06:14:36.629549 | orchestrator | Saturday 14 February 2026 06:14:33 +0000 (0:00:01.838) 0:37:46.321 ***** 2026-02-14 06:14:36.629560 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:14:36.629571 | orchestrator | 2026-02-14 06:14:36.629582 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-14 06:14:36.629593 | orchestrator | Saturday 14 February 2026 06:14:35 +0000 (0:00:01.480) 0:37:47.802 ***** 2026-02-14 06:14:36.629603 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:14:36.629614 | orchestrator | 2026-02-14 06:14:36.629625 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-14 06:14:36.629642 | orchestrator | Saturday 14 February 2026 06:14:36 +0000 (0:00:01.138) 0:37:48.941 ***** 2026-02-14 06:15:24.708127 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:15:24.708207 | orchestrator | 2026-02-14 06:15:24.708214 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-14 06:15:24.708221 | orchestrator | Saturday 14 February 2026 06:14:37 +0000 (0:00:01.204) 0:37:50.145 ***** 2026-02-14 06:15:24.708226 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:15:24.708231 | orchestrator | 2026-02-14 06:15:24.708236 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-14 06:15:24.708241 | orchestrator | Saturday 14 February 2026 06:14:39 +0000 (0:00:01.327) 0:37:51.473 ***** 2026-02-14 06:15:24.708246 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3 2026-02-14 06:15:24.708252 | orchestrator | 
2026-02-14 06:15:24.708257 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-14 06:15:24.708274 | orchestrator | Saturday 14 February 2026 06:14:40 +0000 (0:00:01.138) 0:37:52.611 ***** 2026-02-14 06:15:24.708279 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:15:24.708285 | orchestrator | 2026-02-14 06:15:24.708290 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-14 06:15:24.708295 | orchestrator | Saturday 14 February 2026 06:14:42 +0000 (0:00:01.741) 0:37:54.353 ***** 2026-02-14 06:15:24.708299 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-14 06:15:24.708304 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-14 06:15:24.708309 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-14 06:15:24.708313 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:15:24.708318 | orchestrator | 2026-02-14 06:15:24.708322 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-14 06:15:24.708327 | orchestrator | Saturday 14 February 2026 06:14:43 +0000 (0:00:01.191) 0:37:55.545 ***** 2026-02-14 06:15:24.708332 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:15:24.708337 | orchestrator | 2026-02-14 06:15:24.708341 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-14 06:15:24.708346 | orchestrator | Saturday 14 February 2026 06:14:44 +0000 (0:00:01.107) 0:37:56.653 ***** 2026-02-14 06:15:24.708351 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:15:24.708355 | orchestrator | 2026-02-14 06:15:24.708360 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-14 06:15:24.708364 | orchestrator | Saturday 14 February 2026 06:14:45 +0000 
(0:00:01.176) 0:37:57.829 ***** 2026-02-14 06:15:24.708369 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:15:24.708374 | orchestrator | 2026-02-14 06:15:24.708378 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-14 06:15:24.708383 | orchestrator | Saturday 14 February 2026 06:14:46 +0000 (0:00:01.124) 0:37:58.954 ***** 2026-02-14 06:15:24.708387 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:15:24.708392 | orchestrator | 2026-02-14 06:15:24.708396 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-14 06:15:24.708401 | orchestrator | Saturday 14 February 2026 06:14:47 +0000 (0:00:01.185) 0:38:00.140 ***** 2026-02-14 06:15:24.708405 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:15:24.708425 | orchestrator | 2026-02-14 06:15:24.708430 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-14 06:15:24.708434 | orchestrator | Saturday 14 February 2026 06:14:48 +0000 (0:00:01.178) 0:38:01.318 ***** 2026-02-14 06:15:24.708439 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:15:24.708443 | orchestrator | 2026-02-14 06:15:24.708448 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-14 06:15:24.708452 | orchestrator | Saturday 14 February 2026 06:14:51 +0000 (0:00:02.419) 0:38:03.737 ***** 2026-02-14 06:15:24.708457 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:15:24.708461 | orchestrator | 2026-02-14 06:15:24.708466 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-14 06:15:24.708470 | orchestrator | Saturday 14 February 2026 06:14:52 +0000 (0:00:01.208) 0:38:04.945 ***** 2026-02-14 06:15:24.708475 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3 2026-02-14 06:15:24.708480 | orchestrator | 2026-02-14 
06:15:24.708484 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-14 06:15:24.708489 | orchestrator | Saturday 14 February 2026 06:14:53 +0000 (0:00:01.129) 0:38:06.075 ***** 2026-02-14 06:15:24.708493 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:15:24.708498 | orchestrator | 2026-02-14 06:15:24.708503 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-14 06:15:24.708507 | orchestrator | Saturday 14 February 2026 06:14:54 +0000 (0:00:01.249) 0:38:07.325 ***** 2026-02-14 06:15:24.708512 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:15:24.708516 | orchestrator | 2026-02-14 06:15:24.708521 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-14 06:15:24.708525 | orchestrator | Saturday 14 February 2026 06:14:56 +0000 (0:00:01.211) 0:38:08.537 ***** 2026-02-14 06:15:24.708529 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:15:24.708534 | orchestrator | 2026-02-14 06:15:24.708538 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-14 06:15:24.708543 | orchestrator | Saturday 14 February 2026 06:14:57 +0000 (0:00:01.156) 0:38:09.693 ***** 2026-02-14 06:15:24.708547 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:15:24.708552 | orchestrator | 2026-02-14 06:15:24.708557 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-14 06:15:24.708561 | orchestrator | Saturday 14 February 2026 06:14:58 +0000 (0:00:01.161) 0:38:10.855 ***** 2026-02-14 06:15:24.708566 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:15:24.708570 | orchestrator | 2026-02-14 06:15:24.708575 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-14 06:15:24.708579 | orchestrator | Saturday 14 February 2026 06:14:59 +0000 (0:00:01.288) 
0:38:12.143 ***** 2026-02-14 06:15:24.708584 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:15:24.708588 | orchestrator | 2026-02-14 06:15:24.708602 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-14 06:15:24.708607 | orchestrator | Saturday 14 February 2026 06:15:00 +0000 (0:00:01.144) 0:38:13.288 ***** 2026-02-14 06:15:24.708612 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:15:24.708616 | orchestrator | 2026-02-14 06:15:24.708621 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-14 06:15:24.708625 | orchestrator | Saturday 14 February 2026 06:15:02 +0000 (0:00:01.176) 0:38:14.465 ***** 2026-02-14 06:15:24.708630 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:15:24.708634 | orchestrator | 2026-02-14 06:15:24.708639 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-14 06:15:24.708644 | orchestrator | Saturday 14 February 2026 06:15:03 +0000 (0:00:01.200) 0:38:15.665 ***** 2026-02-14 06:15:24.708648 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:15:24.708653 | orchestrator | 2026-02-14 06:15:24.708657 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-14 06:15:24.708662 | orchestrator | Saturday 14 February 2026 06:15:04 +0000 (0:00:01.157) 0:38:16.823 ***** 2026-02-14 06:15:24.708671 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3 2026-02-14 06:15:24.708676 | orchestrator | 2026-02-14 06:15:24.708680 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-14 06:15:24.708685 | orchestrator | Saturday 14 February 2026 06:15:05 +0000 (0:00:01.217) 0:38:18.041 ***** 2026-02-14 06:15:24.708690 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-02-14 06:15:24.708696 | orchestrator | ok: 
[testbed-node-3] => (item=/var/lib/ceph/) 2026-02-14 06:15:24.708701 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-02-14 06:15:24.708707 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-02-14 06:15:24.708712 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-02-14 06:15:24.708717 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-02-14 06:15:24.708722 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-02-14 06:15:24.708727 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-02-14 06:15:24.708733 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-14 06:15:24.708738 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-14 06:15:24.708743 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-14 06:15:24.708748 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-14 06:15:24.708753 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-14 06:15:24.708758 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-14 06:15:24.708764 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-02-14 06:15:24.708769 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-02-14 06:15:24.708774 | orchestrator | 2026-02-14 06:15:24.708779 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-14 06:15:24.708784 | orchestrator | Saturday 14 February 2026 06:15:12 +0000 (0:00:06.692) 0:38:24.734 ***** 2026-02-14 06:15:24.708790 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3 2026-02-14 06:15:24.708795 | orchestrator | 2026-02-14 06:15:24.708800 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] 
***************** 2026-02-14 06:15:24.708805 | orchestrator | Saturday 14 February 2026 06:15:14 +0000 (0:00:01.742) 0:38:26.476 ***** 2026-02-14 06:15:24.708811 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-14 06:15:24.708817 | orchestrator | 2026-02-14 06:15:24.708822 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-14 06:15:24.708827 | orchestrator | Saturday 14 February 2026 06:15:15 +0000 (0:00:01.542) 0:38:28.019 ***** 2026-02-14 06:15:24.708833 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-14 06:15:24.708838 | orchestrator | 2026-02-14 06:15:24.708843 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-14 06:15:24.708848 | orchestrator | Saturday 14 February 2026 06:15:17 +0000 (0:00:01.997) 0:38:30.016 ***** 2026-02-14 06:15:24.708853 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:15:24.708859 | orchestrator | 2026-02-14 06:15:24.708864 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-14 06:15:24.708869 | orchestrator | Saturday 14 February 2026 06:15:18 +0000 (0:00:01.170) 0:38:31.186 ***** 2026-02-14 06:15:24.708874 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:15:24.708879 | orchestrator | 2026-02-14 06:15:24.708885 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-14 06:15:24.708890 | orchestrator | Saturday 14 February 2026 06:15:20 +0000 (0:00:01.167) 0:38:32.354 ***** 2026-02-14 06:15:24.708895 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:15:24.708900 | orchestrator | 2026-02-14 06:15:24.708905 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 
2026-02-14 06:15:24.708914 | orchestrator | Saturday 14 February 2026 06:15:21 +0000 (0:00:01.159) 0:38:33.514 ***** 2026-02-14 06:15:24.708919 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:15:24.708925 | orchestrator | 2026-02-14 06:15:24.708930 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-14 06:15:24.708935 | orchestrator | Saturday 14 February 2026 06:15:22 +0000 (0:00:01.190) 0:38:34.704 ***** 2026-02-14 06:15:24.708940 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:15:24.708945 | orchestrator | 2026-02-14 06:15:24.708951 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-14 06:15:24.708956 | orchestrator | Saturday 14 February 2026 06:15:23 +0000 (0:00:01.179) 0:38:35.884 ***** 2026-02-14 06:15:24.708961 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:15:24.709028 | orchestrator | 2026-02-14 06:15:24.709041 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-14 06:16:17.111461 | orchestrator | Saturday 14 February 2026 06:15:24 +0000 (0:00:01.138) 0:38:37.023 ***** 2026-02-14 06:16:17.111583 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:16:17.111602 | orchestrator | 2026-02-14 06:16:17.111615 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-14 06:16:17.111628 | orchestrator | Saturday 14 February 2026 06:15:25 +0000 (0:00:01.151) 0:38:38.174 ***** 2026-02-14 06:16:17.111639 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:16:17.111651 | orchestrator | 2026-02-14 06:16:17.111662 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-14 06:16:17.111673 | orchestrator | Saturday 14 February 2026 06:15:27 +0000 (0:00:01.174) 0:38:39.349 ***** 
2026-02-14 06:16:17.111700 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:16:17.111712 | orchestrator | 2026-02-14 06:16:17.111723 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-14 06:16:17.111734 | orchestrator | Saturday 14 February 2026 06:15:28 +0000 (0:00:01.142) 0:38:40.491 ***** 2026-02-14 06:16:17.111745 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:16:17.111756 | orchestrator | 2026-02-14 06:16:17.111767 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-14 06:16:17.111778 | orchestrator | Saturday 14 February 2026 06:15:29 +0000 (0:00:01.226) 0:38:41.718 ***** 2026-02-14 06:16:17.111788 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:16:17.111800 | orchestrator | 2026-02-14 06:16:17.111811 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-14 06:16:17.111822 | orchestrator | Saturday 14 February 2026 06:15:30 +0000 (0:00:01.217) 0:38:42.935 ***** 2026-02-14 06:16:17.111833 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-02-14 06:16:17.111844 | orchestrator | 2026-02-14 06:16:17.111855 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-14 06:16:17.111866 | orchestrator | Saturday 14 February 2026 06:15:35 +0000 (0:00:04.457) 0:38:47.393 ***** 2026-02-14 06:16:17.111877 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-14 06:16:17.111889 | orchestrator | 2026-02-14 06:16:17.111901 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-14 06:16:17.111911 | orchestrator | Saturday 14 February 2026 06:15:36 +0000 (0:00:01.223) 0:38:48.617 ***** 2026-02-14 06:16:17.111925 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-02-14 06:16:17.111939 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-02-14 06:16:17.111975 | orchestrator | 2026-02-14 06:16:17.111988 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-14 06:16:17.112001 | orchestrator | Saturday 14 February 2026 06:15:44 +0000 (0:00:07.838) 0:38:56.455 ***** 2026-02-14 06:16:17.112013 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:16:17.112026 | orchestrator | 2026-02-14 06:16:17.112038 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-14 06:16:17.112050 | orchestrator | Saturday 14 February 2026 06:15:45 +0000 (0:00:01.153) 0:38:57.609 ***** 2026-02-14 06:16:17.112063 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:16:17.112075 | orchestrator | 2026-02-14 06:16:17.112088 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-14 06:16:17.112101 | orchestrator | Saturday 14 February 2026 06:15:46 +0000 (0:00:01.165) 0:38:58.774 ***** 2026-02-14 06:16:17.112114 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:16:17.112127 | orchestrator | 2026-02-14 06:16:17.112139 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-14 
06:16:17.112152 | orchestrator | Saturday 14 February 2026 06:15:47 +0000 (0:00:01.300) 0:39:00.075 ***** 2026-02-14 06:16:17.112165 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:16:17.112207 | orchestrator | 2026-02-14 06:16:17.112220 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-14 06:16:17.112231 | orchestrator | Saturday 14 February 2026 06:15:48 +0000 (0:00:01.186) 0:39:01.262 ***** 2026-02-14 06:16:17.112242 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:16:17.112253 | orchestrator | 2026-02-14 06:16:17.112264 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-14 06:16:17.112275 | orchestrator | Saturday 14 February 2026 06:15:50 +0000 (0:00:01.178) 0:39:02.440 ***** 2026-02-14 06:16:17.112285 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:16:17.112296 | orchestrator | 2026-02-14 06:16:17.112307 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-14 06:16:17.112317 | orchestrator | Saturday 14 February 2026 06:15:51 +0000 (0:00:01.354) 0:39:03.795 ***** 2026-02-14 06:16:17.112328 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-14 06:16:17.112339 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-14 06:16:17.112350 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-14 06:16:17.112361 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:16:17.112371 | orchestrator | 2026-02-14 06:16:17.112382 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-14 06:16:17.112411 | orchestrator | Saturday 14 February 2026 06:15:53 +0000 (0:00:01.886) 0:39:05.681 ***** 2026-02-14 06:16:17.112423 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-14 06:16:17.112434 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-02-14 06:16:17.112445 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-14 06:16:17.112456 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:16:17.112467 | orchestrator | 2026-02-14 06:16:17.112477 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-14 06:16:17.112488 | orchestrator | Saturday 14 February 2026 06:15:55 +0000 (0:00:01.790) 0:39:07.471 ***** 2026-02-14 06:16:17.112499 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-14 06:16:17.112515 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-14 06:16:17.112527 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-14 06:16:17.112538 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:16:17.112548 | orchestrator | 2026-02-14 06:16:17.112559 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-14 06:16:17.112580 | orchestrator | Saturday 14 February 2026 06:15:57 +0000 (0:00:01.988) 0:39:09.460 ***** 2026-02-14 06:16:17.112591 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:16:17.112602 | orchestrator | 2026-02-14 06:16:17.112613 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-14 06:16:17.112623 | orchestrator | Saturday 14 February 2026 06:15:58 +0000 (0:00:01.305) 0:39:10.766 ***** 2026-02-14 06:16:17.112634 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-14 06:16:17.112645 | orchestrator | 2026-02-14 06:16:17.112656 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-14 06:16:17.112666 | orchestrator | Saturday 14 February 2026 06:15:59 +0000 (0:00:01.455) 0:39:12.221 ***** 2026-02-14 06:16:17.112677 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:16:17.112688 | orchestrator | 2026-02-14 06:16:17.112699 | orchestrator | TASK 
[ceph-osd : Set_fact add_osd] ********************************************* 2026-02-14 06:16:17.112709 | orchestrator | Saturday 14 February 2026 06:16:01 +0000 (0:00:01.844) 0:39:14.066 ***** 2026-02-14 06:16:17.112720 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:16:17.112731 | orchestrator | 2026-02-14 06:16:17.112742 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-02-14 06:16:17.112752 | orchestrator | Saturday 14 February 2026 06:16:02 +0000 (0:00:01.172) 0:39:15.239 ***** 2026-02-14 06:16:17.112763 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 06:16:17.112775 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:16:17.112785 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 06:16:17.112796 | orchestrator | 2026-02-14 06:16:17.112807 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-02-14 06:16:17.112818 | orchestrator | Saturday 14 February 2026 06:16:04 +0000 (0:00:01.848) 0:39:17.088 ***** 2026-02-14 06:16:17.112828 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3 2026-02-14 06:16:17.112839 | orchestrator | 2026-02-14 06:16:17.112850 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-02-14 06:16:17.112860 | orchestrator | Saturday 14 February 2026 06:16:06 +0000 (0:00:01.543) 0:39:18.631 ***** 2026-02-14 06:16:17.112871 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:16:17.112882 | orchestrator | 2026-02-14 06:16:17.112893 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-02-14 06:16:17.112903 | orchestrator | Saturday 14 February 2026 06:16:07 +0000 (0:00:01.183) 0:39:19.815 ***** 2026-02-14 06:16:17.112914 | 
orchestrator | skipping: [testbed-node-3] 2026-02-14 06:16:17.112925 | orchestrator | 2026-02-14 06:16:17.112936 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-02-14 06:16:17.112946 | orchestrator | Saturday 14 February 2026 06:16:08 +0000 (0:00:01.183) 0:39:20.998 ***** 2026-02-14 06:16:17.112957 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:16:17.112968 | orchestrator | 2026-02-14 06:16:17.112979 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-02-14 06:16:17.112989 | orchestrator | Saturday 14 February 2026 06:16:10 +0000 (0:00:01.515) 0:39:22.513 ***** 2026-02-14 06:16:17.113000 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:16:17.113011 | orchestrator | 2026-02-14 06:16:17.113022 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-02-14 06:16:17.113032 | orchestrator | Saturday 14 February 2026 06:16:11 +0000 (0:00:01.230) 0:39:23.744 ***** 2026-02-14 06:16:17.113043 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-14 06:16:17.113054 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-14 06:16:17.113065 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-14 06:16:17.113075 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-14 06:16:17.113093 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-14 06:16:17.113104 | orchestrator | 2026-02-14 06:16:17.113115 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-02-14 06:16:17.113126 | orchestrator | Saturday 14 February 2026 06:16:14 +0000 (0:00:03.082) 0:39:26.826 ***** 2026-02-14 06:16:17.113137 | orchestrator | skipping: [testbed-node-3] 
2026-02-14 06:16:17.113148 | orchestrator | 2026-02-14 06:16:17.113159 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-02-14 06:16:17.113169 | orchestrator | Saturday 14 February 2026 06:16:15 +0000 (0:00:01.132) 0:39:27.959 ***** 2026-02-14 06:16:17.113206 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3 2026-02-14 06:16:17.113226 | orchestrator | 2026-02-14 06:16:17.113248 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-02-14 06:17:27.008872 | orchestrator | Saturday 14 February 2026 06:16:17 +0000 (0:00:01.468) 0:39:29.428 ***** 2026-02-14 06:17:27.009018 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-14 06:17:27.009046 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-02-14 06:17:27.009069 | orchestrator | 2026-02-14 06:17:27.009090 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-02-14 06:17:27.009111 | orchestrator | Saturday 14 February 2026 06:16:18 +0000 (0:00:01.900) 0:39:31.328 ***** 2026-02-14 06:17:27.009131 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-14 06:17:27.009151 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-14 06:17:27.009192 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-14 06:17:27.009212 | orchestrator | 2026-02-14 06:17:27.009233 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-02-14 06:17:27.009254 | orchestrator | Saturday 14 February 2026 06:16:22 +0000 (0:00:03.202) 0:39:34.530 ***** 2026-02-14 06:17:27.009273 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-02-14 06:17:27.009337 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-14 06:17:27.009358 | orchestrator | ok: [testbed-node-3] 
2026-02-14 06:17:27.009378 | orchestrator | 2026-02-14 06:17:27.009456 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-14 06:17:27.009480 | orchestrator | Saturday 14 February 2026 06:16:24 +0000 (0:00:02.012) 0:39:36.543 ***** 2026-02-14 06:17:27.009501 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:17:27.009521 | orchestrator | 2026-02-14 06:17:27.009540 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-14 06:17:27.009559 | orchestrator | Saturday 14 February 2026 06:16:25 +0000 (0:00:01.315) 0:39:37.859 ***** 2026-02-14 06:17:27.009579 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:17:27.009598 | orchestrator | 2026-02-14 06:17:27.009617 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-14 06:17:27.009636 | orchestrator | Saturday 14 February 2026 06:16:26 +0000 (0:00:01.219) 0:39:39.079 ***** 2026-02-14 06:17:27.009654 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:17:27.009673 | orchestrator | 2026-02-14 06:17:27.009690 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-14 06:17:27.009709 | orchestrator | Saturday 14 February 2026 06:16:27 +0000 (0:00:01.141) 0:39:40.221 ***** 2026-02-14 06:17:27.009727 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3 2026-02-14 06:17:27.009748 | orchestrator | 2026-02-14 06:17:27.009766 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-14 06:17:27.009784 | orchestrator | Saturday 14 February 2026 06:16:29 +0000 (0:00:01.540) 0:39:41.761 ***** 2026-02-14 06:17:27.009803 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:17:27.009821 | orchestrator | 2026-02-14 06:17:27.009841 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 
2026-02-14 06:17:27.009861 | orchestrator | Saturday 14 February 2026 06:16:31 +0000 (0:00:01.575) 0:39:43.337 ***** 2026-02-14 06:17:27.009915 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:17:27.009935 | orchestrator | 2026-02-14 06:17:27.009953 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-14 06:17:27.009973 | orchestrator | Saturday 14 February 2026 06:16:35 +0000 (0:00:04.050) 0:39:47.388 ***** 2026-02-14 06:17:27.009993 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3 2026-02-14 06:17:27.010010 | orchestrator | 2026-02-14 06:17:27.010099 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-02-14 06:17:27.010110 | orchestrator | Saturday 14 February 2026 06:16:36 +0000 (0:00:01.526) 0:39:48.914 ***** 2026-02-14 06:17:27.010121 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:17:27.010132 | orchestrator | 2026-02-14 06:17:27.010180 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-02-14 06:17:27.010192 | orchestrator | Saturday 14 February 2026 06:16:38 +0000 (0:00:02.010) 0:39:50.925 ***** 2026-02-14 06:17:27.010203 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:17:27.010214 | orchestrator | 2026-02-14 06:17:27.010225 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-02-14 06:17:27.010235 | orchestrator | Saturday 14 February 2026 06:16:40 +0000 (0:00:01.989) 0:39:52.915 ***** 2026-02-14 06:17:27.010246 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:17:27.010257 | orchestrator | 2026-02-14 06:17:27.010268 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-02-14 06:17:27.010278 | orchestrator | Saturday 14 February 2026 06:16:42 +0000 (0:00:02.303) 0:39:55.219 ***** 2026-02-14 06:17:27.010289 | orchestrator | skipping: [testbed-node-3] 
2026-02-14 06:17:27.010300 | orchestrator | 2026-02-14 06:17:27.010311 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-02-14 06:17:27.010322 | orchestrator | Saturday 14 February 2026 06:16:44 +0000 (0:00:01.169) 0:39:56.388 ***** 2026-02-14 06:17:27.010332 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:17:27.010343 | orchestrator | 2026-02-14 06:17:27.010354 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-02-14 06:17:27.010364 | orchestrator | Saturday 14 February 2026 06:16:45 +0000 (0:00:01.185) 0:39:57.573 ***** 2026-02-14 06:17:27.010375 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-14 06:17:27.010386 | orchestrator | ok: [testbed-node-3] => (item=5) 2026-02-14 06:17:27.010396 | orchestrator | 2026-02-14 06:17:27.010443 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-02-14 06:17:27.010455 | orchestrator | Saturday 14 February 2026 06:16:47 +0000 (0:00:01.834) 0:39:59.408 ***** 2026-02-14 06:17:27.010465 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-14 06:17:27.010476 | orchestrator | ok: [testbed-node-3] => (item=5) 2026-02-14 06:17:27.010487 | orchestrator | 2026-02-14 06:17:27.010498 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-02-14 06:17:27.010509 | orchestrator | Saturday 14 February 2026 06:16:49 +0000 (0:00:02.911) 0:40:02.320 ***** 2026-02-14 06:17:27.010520 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-02-14 06:17:27.010555 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-02-14 06:17:27.010567 | orchestrator | 2026-02-14 06:17:27.010578 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-02-14 06:17:27.010590 | orchestrator | Saturday 14 February 2026 06:16:54 +0000 (0:00:04.738) 0:40:07.058 ***** 2026-02-14 06:17:27.010600 | orchestrator 
| skipping: [testbed-node-3] 2026-02-14 06:17:27.010611 | orchestrator | 2026-02-14 06:17:27.010622 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-02-14 06:17:27.010633 | orchestrator | Saturday 14 February 2026 06:16:56 +0000 (0:00:01.280) 0:40:08.339 ***** 2026-02-14 06:17:27.010644 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:17:27.010654 | orchestrator | 2026-02-14 06:17:27.010665 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-02-14 06:17:27.010687 | orchestrator | Saturday 14 February 2026 06:16:57 +0000 (0:00:01.267) 0:40:09.607 ***** 2026-02-14 06:17:27.010709 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:17:27.010720 | orchestrator | 2026-02-14 06:17:27.010731 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-02-14 06:17:27.010741 | orchestrator | Saturday 14 February 2026 06:16:59 +0000 (0:00:01.938) 0:40:11.545 ***** 2026-02-14 06:17:27.010752 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:17:27.010763 | orchestrator | 2026-02-14 06:17:27.010774 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-02-14 06:17:27.010785 | orchestrator | Saturday 14 February 2026 06:17:00 +0000 (0:00:01.169) 0:40:12.715 ***** 2026-02-14 06:17:27.010796 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:17:27.010806 | orchestrator | 2026-02-14 06:17:27.010817 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-02-14 06:17:27.010828 | orchestrator | Saturday 14 February 2026 06:17:01 +0000 (0:00:01.118) 0:40:13.833 ***** 2026-02-14 06:17:27.010839 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-02-14 06:17:27.010851 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (599 retries left).
2026-02-14 06:17:27.010862 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (598 retries left).
2026-02-14 06:17:27.010873 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-14 06:17:27.010884 | orchestrator |
2026-02-14 06:17:27.010895 | orchestrator | PLAY [Upgrade ceph osds cluster] ***********************************************
2026-02-14 06:17:27.010906 | orchestrator |
2026-02-14 06:17:27.010917 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-14 06:17:27.010928 | orchestrator | Saturday 14 February 2026 06:17:12 +0000 (0:00:10.969) 0:40:24.803 *****
2026-02-14 06:17:27.010939 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4
2026-02-14 06:17:27.010950 | orchestrator |
2026-02-14 06:17:27.010961 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-14 06:17:27.010971 | orchestrator | Saturday 14 February 2026 06:17:13 +0000 (0:00:01.109) 0:40:25.912 *****
2026-02-14 06:17:27.010983 | orchestrator | ok: [testbed-node-4]
2026-02-14 06:17:27.010993 | orchestrator |
2026-02-14 06:17:27.011004 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-14 06:17:27.011015 | orchestrator | Saturday 14 February 2026 06:17:15 +0000 (0:00:01.471) 0:40:27.383 *****
2026-02-14 06:17:27.011026 | orchestrator | ok: [testbed-node-4]
2026-02-14 06:17:27.011036 | orchestrator |
2026-02-14 06:17:27.011047 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-14 06:17:27.011058 | orchestrator | Saturday 14 February 2026 06:17:16 +0000 (0:00:01.129) 0:40:28.513 *****
2026-02-14 06:17:27.011069 | orchestrator | ok: [testbed-node-4]
2026-02-14 06:17:27.011079 | orchestrator |
2026-02-14 06:17:27.011090 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-14 06:17:27.011101 | orchestrator | Saturday 14 February 2026 06:17:17 +0000 (0:00:01.570) 0:40:30.084 *****
2026-02-14 06:17:27.011112 | orchestrator | ok: [testbed-node-4]
2026-02-14 06:17:27.011123 | orchestrator |
2026-02-14 06:17:27.011133 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-14 06:17:27.011144 | orchestrator | Saturday 14 February 2026 06:17:18 +0000 (0:00:01.230) 0:40:31.314 *****
2026-02-14 06:17:27.011155 | orchestrator | ok: [testbed-node-4]
2026-02-14 06:17:27.011166 | orchestrator |
2026-02-14 06:17:27.011177 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-14 06:17:27.011188 | orchestrator | Saturday 14 February 2026 06:17:20 +0000 (0:00:01.120) 0:40:32.435 *****
2026-02-14 06:17:27.011199 | orchestrator | ok: [testbed-node-4]
2026-02-14 06:17:27.011210 | orchestrator |
2026-02-14 06:17:27.011220 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-14 06:17:27.011231 | orchestrator | Saturday 14 February 2026 06:17:21 +0000 (0:00:01.389) 0:40:33.825 *****
2026-02-14 06:17:27.011248 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:17:27.011259 | orchestrator |
2026-02-14 06:17:27.011270 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-14 06:17:27.011281 | orchestrator | Saturday 14 February 2026 06:17:22 +0000 (0:00:01.164) 0:40:34.990 *****
2026-02-14 06:17:27.011292 | orchestrator | ok: [testbed-node-4]
2026-02-14 06:17:27.011303 | orchestrator |
2026-02-14 06:17:27.011314 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-14 06:17:27.011324 | orchestrator | Saturday 14 February 2026 06:17:23 +0000 (0:00:01.191) 0:40:36.181 *****
2026-02-14 06:17:27.011335 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-14 06:17:27.011346 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-14 06:17:27.011357 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-14 06:17:27.011367 | orchestrator |
2026-02-14 06:17:27.011378 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-14 06:17:27.011389 | orchestrator | Saturday 14 February 2026 06:17:25 +0000 (0:00:01.868) 0:40:38.050 *****
2026-02-14 06:17:27.011474 | orchestrator | ok: [testbed-node-4]
2026-02-14 06:17:51.469691 | orchestrator |
2026-02-14 06:17:51.469794 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-14 06:17:51.469807 | orchestrator | Saturday 14 February 2026 06:17:26 +0000 (0:00:01.271) 0:40:39.322 *****
2026-02-14 06:17:51.469815 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-14 06:17:51.469823 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-14 06:17:51.469831 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-14 06:17:51.469838 | orchestrator |
2026-02-14 06:17:51.469846 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-14 06:17:51.469867 | orchestrator | Saturday 14 February 2026 06:17:29 +0000 (0:00:02.944) 0:40:42.266 *****
2026-02-14 06:17:51.469875 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-14 06:17:51.469883 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-14 06:17:51.469891 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-14 06:17:51.469898 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:17:51.469906 | orchestrator |
2026-02-14 06:17:51.469913 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-14 06:17:51.469920 | orchestrator | Saturday 14 February 2026 06:17:31 +0000 (0:00:01.507) 0:40:43.774 *****
2026-02-14 06:17:51.469928 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-14 06:17:51.469939 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-14 06:17:51.469947 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-14 06:17:51.469954 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:17:51.469961 | orchestrator |
2026-02-14 06:17:51.469969 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-14 06:17:51.469976 | orchestrator | Saturday 14 February 2026 06:17:33 +0000 (0:00:01.705) 0:40:45.479 *****
2026-02-14 06:17:51.469985 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-14 06:17:51.470013 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-14 06:17:51.470067 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-14 06:17:51.470074 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:17:51.470082 | orchestrator |
2026-02-14 06:17:51.470089 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-14 06:17:51.470096 | orchestrator | Saturday 14 February 2026 06:17:34 +0000 (0:00:01.192) 0:40:46.672 *****
2026-02-14 06:17:51.470120 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'fcade5e8eca4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-14 06:17:27.561096', 'end': '2026-02-14 06:17:27.615032', 'delta': '0:00:00.053936', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fcade5e8eca4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-14 06:17:51.470134 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'b8937503c016', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-14 06:17:28.143058', 'end': '2026-02-14 06:17:28.183099', 'delta': '0:00:00.040041', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b8937503c016'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-14 06:17:51.470143 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'bc1e9cbf1ddd', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-14 06:17:28.710511', 'end': '2026-02-14 06:17:28.751080', 'delta': '0:00:00.040569', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['bc1e9cbf1ddd'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-14 06:17:51.470151 | orchestrator |
2026-02-14 06:17:51.470158 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-14 06:17:51.470165 | orchestrator | Saturday 14 February 2026 06:17:35 +0000 (0:00:01.252) 0:40:47.925 *****
2026-02-14 06:17:51.470173 | orchestrator | ok: [testbed-node-4]
2026-02-14 06:17:51.470187 | orchestrator |
2026-02-14 06:17:51.470194 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-14 06:17:51.470201 | orchestrator | Saturday 14 February 2026 06:17:36 +0000 (0:00:01.286) 0:40:49.211 *****
2026-02-14 06:17:51.470209 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:17:51.470216 | orchestrator |
2026-02-14 06:17:51.470223 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-14 06:17:51.470231 | orchestrator | Saturday 14 February 2026 06:17:38 +0000 (0:00:01.314) 0:40:50.525 *****
2026-02-14 06:17:51.470239 | orchestrator | ok: [testbed-node-4]
2026-02-14 06:17:51.470247 | orchestrator |
2026-02-14 06:17:51.470256 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-14 06:17:51.470264 | orchestrator | Saturday 14 February 2026 06:17:39 +0000 (0:00:02.462) 0:40:51.709 *****
2026-02-14 06:17:51.470272 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-14 06:17:51.470280 | orchestrator |
2026-02-14 06:17:51.470288 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-14 06:17:51.470297 | orchestrator | Saturday 14 February 2026 06:17:41 +0000 (0:00:02.462) 0:40:54.171 *****
2026-02-14 06:17:51.470305 | orchestrator | ok: [testbed-node-4]
2026-02-14 06:17:51.470313 | orchestrator |
2026-02-14 06:17:51.470321 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-14 06:17:51.470329 | orchestrator | Saturday 14 February 2026 06:17:43 +0000 (0:00:01.156) 0:40:55.328 *****
2026-02-14 06:17:51.470337 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:17:51.470345 | orchestrator |
2026-02-14 06:17:51.470354 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-14 06:17:51.470362 | orchestrator | Saturday 14 February 2026 06:17:44 +0000 (0:00:01.338) 0:40:56.667 *****
2026-02-14 06:17:51.470370 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:17:51.470377 | orchestrator |
2026-02-14 06:17:51.470384 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-14 06:17:51.470391 | orchestrator | Saturday 14 February 2026 06:17:45 +0000 (0:00:01.297) 0:40:57.964 *****
2026-02-14 06:17:51.470399 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:17:51.470406 | orchestrator |
2026-02-14 06:17:51.470413 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-14 06:17:51.470420 | orchestrator | Saturday 14 February 2026 06:17:46 +0000 (0:00:01.108) 0:40:59.073 *****
2026-02-14 06:17:51.470428 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:17:51.470435 | orchestrator |
2026-02-14 06:17:51.470442 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-14 06:17:51.470449 | orchestrator | Saturday 14 February 2026 06:17:47 +0000 (0:00:01.152) 0:41:00.226 *****
2026-02-14 06:17:51.470456 | orchestrator | ok: [testbed-node-4]
2026-02-14 06:17:51.470463 | orchestrator |
2026-02-14 06:17:51.470490 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-14 06:17:51.470497 | orchestrator | Saturday 14 February 2026 06:17:49 +0000 (0:00:01.250) 0:41:01.477 *****
2026-02-14 06:17:51.470505 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:17:51.470512 | orchestrator |
2026-02-14 06:17:51.470519 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-14 06:17:51.470526 | orchestrator | Saturday 14 February 2026 06:17:50 +0000 (0:00:01.164) 0:41:02.642 *****
2026-02-14 06:17:51.470533 | orchestrator | ok: [testbed-node-4]
2026-02-14 06:17:51.470541 | orchestrator |
2026-02-14 06:17:51.470548 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-14 06:17:51.470560 | orchestrator | Saturday 14 February 2026 06:17:51 +0000 (0:00:01.141) 0:41:03.783 *****
2026-02-14 06:17:54.066625 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:17:54.066730 | orchestrator |
2026-02-14 06:17:54.066744 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-14 06:17:54.066763 | orchestrator | Saturday 14 February 2026 06:17:52 +0000 (0:00:01.157) 0:41:04.941 *****
2026-02-14 06:17:54.066800 | orchestrator | ok: [testbed-node-4]
2026-02-14 06:17:54.066811 | orchestrator |
2026-02-14 06:17:54.066820 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-14 06:17:54.066829 | orchestrator | Saturday 14 February 2026 06:17:53 +0000 (0:00:01.179) 0:41:06.120 *****
2026-02-14 06:17:54.066853 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-14 06:17:54.066867 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--df737486--1b51--5b4a--92b8--76d7a8957091-osd--block--df737486--1b51--5b4a--92b8--76d7a8957091', 'dm-uuid-LVM-EB1XqRdFm5BWl32sOsML4BzRiPAaSfab8xK25yZZCddpKgHxc3NQuNizerGpwRdL'], 'uuids': ['cbd2394d-6972-4905-b52e-c3fabde9215a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f960435b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['8xK25y-ZZCd-dpKg-Hxc3-NQuN-izer-GpwRdL']}})
2026-02-14 06:17:54.066880 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_600e740f-7698-45cf-9f18-28df3084435e', 'scsi-SQEMU_QEMU_HARDDISK_600e740f-7698-45cf-9f18-28df3084435e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '600e740f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-14 06:17:54.066891 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-PPJEoE-t8lH-Lsu9-VCxv-DzG3-SEi9-DpziQD', 'scsi-0QEMU_QEMU_HARDDISK_f8b6a063-90c4-466f-950a-7ec8689e5fcc', 'scsi-SQEMU_QEMU_HARDDISK_f8b6a063-90c4-466f-950a-7ec8689e5fcc'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f8b6a063', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--7b577363--2bac--543e--944e--5354861b1af5-osd--block--7b577363--2bac--543e--944e--5354861b1af5']}})
2026-02-14 06:17:54.066901 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-14 06:17:54.066911 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-14 06:17:54.066936 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-04-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-14 06:17:54.066958 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-14 06:17:54.066967 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-SN89xi-mT6S-OMxw-vqsI-uUyB-OeGR-YcFBXd', 'dm-uuid-CRYPT-LUKS2-366eda1d300c4ff497bf868d045a2886-SN89xi-mT6S-OMxw-vqsI-uUyB-OeGR-YcFBXd'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-14 06:17:54.066977 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-14 06:17:54.066986 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7b577363--2bac--543e--944e--5354861b1af5-osd--block--7b577363--2bac--543e--944e--5354861b1af5', 'dm-uuid-LVM-0VL0CxXxe2vdWsz49rVaxb3uSV9CWoFcSN89ximT6SOMxwvqsIuUyBOeGRYcFBXd'], 'uuids': ['366eda1d-300c-4ff4-97bf-868d045a2886'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f8b6a063', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['SN89xi-mT6S-OMxw-vqsI-uUyB-OeGR-YcFBXd']}})
2026-02-14 06:17:54.066995 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-9XBo1I-CFLx-ADHD-pZVq-BmE6-mdcf-IWW9zX', 'scsi-0QEMU_QEMU_HARDDISK_f960435b-b83d-47c8-ac31-653544f80bd0', 'scsi-SQEMU_QEMU_HARDDISK_f960435b-b83d-47c8-ac31-653544f80bd0'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f960435b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--df737486--1b51--5b4a--92b8--76d7a8957091-osd--block--df737486--1b51--5b4a--92b8--76d7a8957091']}})
2026-02-14 06:17:54.067004 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-14 06:17:54.067030 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '677d5586', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part16', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part14', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part15', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part1', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-14 06:17:55.405832 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-14 06:17:55.405914 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-14 06:17:55.405926 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-8xK25y-ZZCd-dpKg-Hxc3-NQuN-izer-GpwRdL', 'dm-uuid-CRYPT-LUKS2-cbd2394d69724905b52ec3fabde9215a-8xK25y-ZZCd-dpKg-Hxc3-NQuN-izer-GpwRdL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-14 06:17:55.405937 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:17:55.405945 | orchestrator |
2026-02-14 06:17:55.405953 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-14 06:17:55.405962 | orchestrator | Saturday 14 February 2026 06:17:55 +0000 (0:00:01.385) 0:41:07.506 *****
2026-02-14 06:17:55.405972 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-14 06:17:55.406013 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--df737486--1b51--5b4a--92b8--76d7a8957091-osd--block--df737486--1b51--5b4a--92b8--76d7a8957091', 'dm-uuid-LVM-EB1XqRdFm5BWl32sOsML4BzRiPAaSfab8xK25yZZCddpKgHxc3NQuNizerGpwRdL'], 'uuids': ['cbd2394d-6972-4905-b52e-c3fabde9215a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f960435b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['8xK25y-ZZCd-dpKg-Hxc3-NQuN-izer-GpwRdL']}}, 'ansible_loop_var': 'item'})
2026-02-14 06:17:55.406074 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_600e740f-7698-45cf-9f18-28df3084435e', 'scsi-SQEMU_QEMU_HARDDISK_600e740f-7698-45cf-9f18-28df3084435e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '600e740f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-14 06:17:55.406100 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-PPJEoE-t8lH-Lsu9-VCxv-DzG3-SEi9-DpziQD', 'scsi-0QEMU_QEMU_HARDDISK_f8b6a063-90c4-466f-950a-7ec8689e5fcc', 'scsi-SQEMU_QEMU_HARDDISK_f8b6a063-90c4-466f-950a-7ec8689e5fcc'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f8b6a063', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--7b577363--2bac--543e--944e--5354861b1af5-osd--block--7b577363--2bac--543e--944e--5354861b1af5']}}, 'ansible_loop_var': 'item'})
2026-02-14 06:17:55.406112 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-14 06:17:55.406121 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-14 06:17:55.406141 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-04-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-14 06:17:55.406150 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-14 06:17:55.406165 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-SN89xi-mT6S-OMxw-vqsI-uUyB-OeGR-YcFBXd', 'dm-uuid-CRYPT-LUKS2-366eda1d300c4ff497bf868d045a2886-SN89xi-mT6S-OMxw-vqsI-uUyB-OeGR-YcFBXd'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-14 06:18:01.901567 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [],
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:18:01.901677 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7b577363--2bac--543e--944e--5354861b1af5-osd--block--7b577363--2bac--543e--944e--5354861b1af5', 'dm-uuid-LVM-0VL0CxXxe2vdWsz49rVaxb3uSV9CWoFcSN89ximT6SOMxwvqsIuUyBOeGRYcFBXd'], 'uuids': ['366eda1d-300c-4ff4-97bf-868d045a2886'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f8b6a063', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['SN89xi-mT6S-OMxw-vqsI-uUyB-OeGR-YcFBXd']}}, 'ansible_loop_var': 'item'})  2026-02-14 06:18:01.901712 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-9XBo1I-CFLx-ADHD-pZVq-BmE6-mdcf-IWW9zX', 'scsi-0QEMU_QEMU_HARDDISK_f960435b-b83d-47c8-ac31-653544f80bd0', 'scsi-SQEMU_QEMU_HARDDISK_f960435b-b83d-47c8-ac31-653544f80bd0'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f960435b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--df737486--1b51--5b4a--92b8--76d7a8957091-osd--block--df737486--1b51--5b4a--92b8--76d7a8957091']}}, 'ansible_loop_var': 'item'})  2026-02-14 06:18:01.901738 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:18:01.901771 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '677d5586', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part16', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part14', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part15', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part1', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:18:01.901798 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:18:01.901814 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:18:01.901836 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-8xK25y-ZZCd-dpKg-Hxc3-NQuN-izer-GpwRdL', 'dm-uuid-CRYPT-LUKS2-cbd2394d69724905b52ec3fabde9215a-8xK25y-ZZCd-dpKg-Hxc3-NQuN-izer-GpwRdL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:18:01.901852 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:18:01.901867 | orchestrator | 2026-02-14 06:18:01.901876 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-14 06:18:01.901886 | orchestrator | Saturday 14 February 2026 06:17:56 +0000 (0:00:01.427) 0:41:08.933 ***** 2026-02-14 06:18:01.901894 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:18:01.901903 | orchestrator | 2026-02-14 06:18:01.901911 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-14 06:18:01.901919 | orchestrator | Saturday 14 February 2026 06:17:59 +0000 (0:00:02.657) 0:41:11.591 ***** 2026-02-14 06:18:01.901927 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:18:01.901934 | orchestrator | 2026-02-14 06:18:01.901942 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-14 06:18:01.901950 | orchestrator | Saturday 14 February 2026 06:18:00 +0000 (0:00:01.187) 0:41:12.778 ***** 2026-02-14 06:18:01.901958 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:18:01.901966 | orchestrator | 2026-02-14 06:18:01.901973 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-14 06:18:01.901988 | orchestrator | Saturday 14 February 2026 06:18:01 +0000 (0:00:01.436) 0:41:14.215 ***** 2026-02-14 06:18:44.220651 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:18:44.220790 | orchestrator | 2026-02-14 06:18:44.220809 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-14 06:18:44.220823 | orchestrator | Saturday 14 February 2026 06:18:03 +0000 (0:00:01.204) 0:41:15.420 ***** 2026-02-14 06:18:44.220835 | orchestrator | skipping: [testbed-node-4] 2026-02-14 
06:18:44.220846 | orchestrator | 2026-02-14 06:18:44.220857 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-14 06:18:44.220868 | orchestrator | Saturday 14 February 2026 06:18:04 +0000 (0:00:01.359) 0:41:16.779 ***** 2026-02-14 06:18:44.220879 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:18:44.220890 | orchestrator | 2026-02-14 06:18:44.220901 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-14 06:18:44.220939 | orchestrator | Saturday 14 February 2026 06:18:05 +0000 (0:00:01.177) 0:41:17.957 ***** 2026-02-14 06:18:44.220952 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-14 06:18:44.220963 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-14 06:18:44.220980 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-14 06:18:44.220999 | orchestrator | 2026-02-14 06:18:44.221017 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-14 06:18:44.221034 | orchestrator | Saturday 14 February 2026 06:18:07 +0000 (0:00:01.689) 0:41:19.647 ***** 2026-02-14 06:18:44.221053 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-14 06:18:44.221072 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-14 06:18:44.221090 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-14 06:18:44.221106 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:18:44.221117 | orchestrator | 2026-02-14 06:18:44.221127 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-14 06:18:44.221140 | orchestrator | Saturday 14 February 2026 06:18:08 +0000 (0:00:01.238) 0:41:20.885 ***** 2026-02-14 06:18:44.221153 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4 2026-02-14 06:18:44.221166 | 
orchestrator | 2026-02-14 06:18:44.221181 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-14 06:18:44.221195 | orchestrator | Saturday 14 February 2026 06:18:09 +0000 (0:00:01.321) 0:41:22.207 ***** 2026-02-14 06:18:44.221207 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:18:44.221219 | orchestrator | 2026-02-14 06:18:44.221231 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-14 06:18:44.221244 | orchestrator | Saturday 14 February 2026 06:18:11 +0000 (0:00:01.212) 0:41:23.420 ***** 2026-02-14 06:18:44.221256 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:18:44.221268 | orchestrator | 2026-02-14 06:18:44.221280 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-14 06:18:44.221292 | orchestrator | Saturday 14 February 2026 06:18:12 +0000 (0:00:01.216) 0:41:24.637 ***** 2026-02-14 06:18:44.221305 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:18:44.221317 | orchestrator | 2026-02-14 06:18:44.221329 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-14 06:18:44.221341 | orchestrator | Saturday 14 February 2026 06:18:13 +0000 (0:00:01.230) 0:41:25.867 ***** 2026-02-14 06:18:44.221355 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:18:44.221374 | orchestrator | 2026-02-14 06:18:44.221392 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-14 06:18:44.221412 | orchestrator | Saturday 14 February 2026 06:18:14 +0000 (0:00:01.317) 0:41:27.184 ***** 2026-02-14 06:18:44.221430 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-14 06:18:44.221450 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-14 06:18:44.221469 | orchestrator | skipping: [testbed-node-4] 
=> (item=testbed-node-5)  2026-02-14 06:18:44.221487 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:18:44.221498 | orchestrator | 2026-02-14 06:18:44.221523 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-14 06:18:44.221534 | orchestrator | Saturday 14 February 2026 06:18:16 +0000 (0:00:01.469) 0:41:28.654 ***** 2026-02-14 06:18:44.221545 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-14 06:18:44.221555 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-14 06:18:44.221566 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-14 06:18:44.221576 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:18:44.221587 | orchestrator | 2026-02-14 06:18:44.221598 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-14 06:18:44.221608 | orchestrator | Saturday 14 February 2026 06:18:17 +0000 (0:00:01.460) 0:41:30.115 ***** 2026-02-14 06:18:44.221689 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-14 06:18:44.221702 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-14 06:18:44.221713 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-14 06:18:44.221724 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:18:44.221734 | orchestrator | 2026-02-14 06:18:44.221748 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-14 06:18:44.221766 | orchestrator | Saturday 14 February 2026 06:18:19 +0000 (0:00:01.404) 0:41:31.519 ***** 2026-02-14 06:18:44.221784 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:18:44.221802 | orchestrator | 2026-02-14 06:18:44.221821 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-14 06:18:44.221840 | orchestrator | Saturday 14 February 2026 06:18:20 +0000 
(0:00:01.200) 0:41:32.720 ***** 2026-02-14 06:18:44.221860 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-14 06:18:44.221879 | orchestrator | 2026-02-14 06:18:44.221897 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-14 06:18:44.221912 | orchestrator | Saturday 14 February 2026 06:18:21 +0000 (0:00:01.370) 0:41:34.090 ***** 2026-02-14 06:18:44.221941 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 06:18:44.221953 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:18:44.221964 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 06:18:44.221974 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-14 06:18:44.221985 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-02-14 06:18:44.221996 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-14 06:18:44.222006 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-14 06:18:44.222078 | orchestrator | 2026-02-14 06:18:44.222090 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-14 06:18:44.222101 | orchestrator | Saturday 14 February 2026 06:18:23 +0000 (0:00:01.832) 0:41:35.923 ***** 2026-02-14 06:18:44.222112 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 06:18:44.222132 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:18:44.222151 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 06:18:44.222170 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-02-14 06:18:44.222190 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-02-14 06:18:44.222210 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-14 06:18:44.222231 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-14 06:18:44.222249 | orchestrator | 2026-02-14 06:18:44.222260 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-02-14 06:18:44.222271 | orchestrator | Saturday 14 February 2026 06:18:25 +0000 (0:00:02.295) 0:41:38.219 ***** 2026-02-14 06:18:44.222281 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:18:44.222292 | orchestrator | 2026-02-14 06:18:44.222303 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-02-14 06:18:44.222313 | orchestrator | Saturday 14 February 2026 06:18:27 +0000 (0:00:01.115) 0:41:39.335 ***** 2026-02-14 06:18:44.222324 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:18:44.222334 | orchestrator | 2026-02-14 06:18:44.222345 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-02-14 06:18:44.222355 | orchestrator | Saturday 14 February 2026 06:18:27 +0000 (0:00:00.791) 0:41:40.126 ***** 2026-02-14 06:18:44.222366 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:18:44.222386 | orchestrator | 2026-02-14 06:18:44.222397 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-02-14 06:18:44.222408 | orchestrator | Saturday 14 February 2026 06:18:28 +0000 (0:00:00.942) 0:41:41.069 ***** 2026-02-14 06:18:44.222418 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-02-14 06:18:44.222429 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-02-14 06:18:44.222439 | orchestrator | 2026-02-14 06:18:44.222450 | orchestrator | TASK [ceph-handler : Include 
check_running_cluster.yml] ************************ 2026-02-14 06:18:44.222461 | orchestrator | Saturday 14 February 2026 06:18:32 +0000 (0:00:03.767) 0:41:44.837 ***** 2026-02-14 06:18:44.222471 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4 2026-02-14 06:18:44.222483 | orchestrator | 2026-02-14 06:18:44.222500 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-14 06:18:44.222519 | orchestrator | Saturday 14 February 2026 06:18:33 +0000 (0:00:01.260) 0:41:46.097 ***** 2026-02-14 06:18:44.222537 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4 2026-02-14 06:18:44.222555 | orchestrator | 2026-02-14 06:18:44.222593 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-14 06:18:44.222607 | orchestrator | Saturday 14 February 2026 06:18:34 +0000 (0:00:01.183) 0:41:47.280 ***** 2026-02-14 06:18:44.222641 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:18:44.222658 | orchestrator | 2026-02-14 06:18:44.222669 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-14 06:18:44.222680 | orchestrator | Saturday 14 February 2026 06:18:36 +0000 (0:00:01.151) 0:41:48.432 ***** 2026-02-14 06:18:44.222690 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:18:44.222701 | orchestrator | 2026-02-14 06:18:44.222712 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-14 06:18:44.222722 | orchestrator | Saturday 14 February 2026 06:18:37 +0000 (0:00:01.519) 0:41:49.952 ***** 2026-02-14 06:18:44.222733 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:18:44.222744 | orchestrator | 2026-02-14 06:18:44.222755 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-14 06:18:44.222765 | orchestrator | 
Saturday 14 February 2026 06:18:39 +0000 (0:00:01.574) 0:41:51.526 ***** 2026-02-14 06:18:44.222776 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:18:44.222786 | orchestrator | 2026-02-14 06:18:44.222797 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-14 06:18:44.222807 | orchestrator | Saturday 14 February 2026 06:18:40 +0000 (0:00:01.570) 0:41:53.097 ***** 2026-02-14 06:18:44.222818 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:18:44.222829 | orchestrator | 2026-02-14 06:18:44.222839 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-14 06:18:44.222851 | orchestrator | Saturday 14 February 2026 06:18:41 +0000 (0:00:01.126) 0:41:54.224 ***** 2026-02-14 06:18:44.222869 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:18:44.222887 | orchestrator | 2026-02-14 06:18:44.222940 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-14 06:18:44.222963 | orchestrator | Saturday 14 February 2026 06:18:43 +0000 (0:00:01.190) 0:41:55.414 ***** 2026-02-14 06:18:44.222977 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:18:44.222988 | orchestrator | 2026-02-14 06:18:44.223009 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-14 06:19:25.194614 | orchestrator | Saturday 14 February 2026 06:18:44 +0000 (0:00:01.118) 0:41:56.533 ***** 2026-02-14 06:19:25.194796 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:19:25.194829 | orchestrator | 2026-02-14 06:19:25.194849 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-14 06:19:25.194867 | orchestrator | Saturday 14 February 2026 06:18:45 +0000 (0:00:01.623) 0:41:58.157 ***** 2026-02-14 06:19:25.194884 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:19:25.194903 | orchestrator | 2026-02-14 06:19:25.194923 | orchestrator | 
TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-14 06:19:25.194975 | orchestrator | Saturday 14 February 2026 06:18:47 +0000 (0:00:01.526) 0:41:59.684 ***** 2026-02-14 06:19:25.194996 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:19:25.195016 | orchestrator | 2026-02-14 06:19:25.195035 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-14 06:19:25.195053 | orchestrator | Saturday 14 February 2026 06:18:48 +0000 (0:00:00.868) 0:42:00.553 ***** 2026-02-14 06:19:25.195071 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:19:25.195090 | orchestrator | 2026-02-14 06:19:25.195110 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-14 06:19:25.195130 | orchestrator | Saturday 14 February 2026 06:18:49 +0000 (0:00:00.835) 0:42:01.388 ***** 2026-02-14 06:19:25.195149 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:19:25.195169 | orchestrator | 2026-02-14 06:19:25.195189 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-14 06:19:25.195208 | orchestrator | Saturday 14 February 2026 06:18:49 +0000 (0:00:00.830) 0:42:02.219 ***** 2026-02-14 06:19:25.195226 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:19:25.195238 | orchestrator | 2026-02-14 06:19:25.195252 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-14 06:19:25.195264 | orchestrator | Saturday 14 February 2026 06:18:50 +0000 (0:00:00.819) 0:42:03.038 ***** 2026-02-14 06:19:25.195277 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:19:25.195289 | orchestrator | 2026-02-14 06:19:25.195302 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-14 06:19:25.195314 | orchestrator | Saturday 14 February 2026 06:18:51 +0000 (0:00:00.805) 0:42:03.843 ***** 2026-02-14 06:19:25.195327 | 
orchestrator | skipping: [testbed-node-4] 2026-02-14 06:19:25.195339 | orchestrator | 2026-02-14 06:19:25.195352 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-14 06:19:25.195363 | orchestrator | Saturday 14 February 2026 06:18:52 +0000 (0:00:00.793) 0:42:04.637 ***** 2026-02-14 06:19:25.195376 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:19:25.195388 | orchestrator | 2026-02-14 06:19:25.195399 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-14 06:19:25.195412 | orchestrator | Saturday 14 February 2026 06:18:53 +0000 (0:00:00.793) 0:42:05.431 ***** 2026-02-14 06:19:25.195424 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:19:25.195436 | orchestrator | 2026-02-14 06:19:25.195449 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-14 06:19:25.195461 | orchestrator | Saturday 14 February 2026 06:18:53 +0000 (0:00:00.827) 0:42:06.258 ***** 2026-02-14 06:19:25.195471 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:19:25.195482 | orchestrator | 2026-02-14 06:19:25.195493 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-14 06:19:25.195504 | orchestrator | Saturday 14 February 2026 06:18:54 +0000 (0:00:00.813) 0:42:07.073 ***** 2026-02-14 06:19:25.195514 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:19:25.195525 | orchestrator | 2026-02-14 06:19:25.195535 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-14 06:19:25.195546 | orchestrator | Saturday 14 February 2026 06:18:55 +0000 (0:00:00.803) 0:42:07.876 ***** 2026-02-14 06:19:25.195557 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:19:25.195568 | orchestrator | 2026-02-14 06:19:25.195579 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-14 
06:19:25.195590 | orchestrator | Saturday 14 February 2026 06:18:56 +0000 (0:00:00.793) 0:42:08.669 ***** 2026-02-14 06:19:25.195616 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:19:25.195628 | orchestrator | 2026-02-14 06:19:25.195638 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-14 06:19:25.195649 | orchestrator | Saturday 14 February 2026 06:18:57 +0000 (0:00:00.764) 0:42:09.434 ***** 2026-02-14 06:19:25.195660 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:19:25.195670 | orchestrator | 2026-02-14 06:19:25.195681 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-14 06:19:25.195704 | orchestrator | Saturday 14 February 2026 06:18:57 +0000 (0:00:00.768) 0:42:10.203 ***** 2026-02-14 06:19:25.195715 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:19:25.195901 | orchestrator | 2026-02-14 06:19:25.195934 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-14 06:19:25.195946 | orchestrator | Saturday 14 February 2026 06:18:58 +0000 (0:00:00.884) 0:42:11.087 ***** 2026-02-14 06:19:25.195956 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:19:25.195967 | orchestrator | 2026-02-14 06:19:25.195978 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-14 06:19:25.195989 | orchestrator | Saturday 14 February 2026 06:18:59 +0000 (0:00:00.792) 0:42:11.879 ***** 2026-02-14 06:19:25.195999 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:19:25.196010 | orchestrator | 2026-02-14 06:19:25.196021 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-14 06:19:25.196031 | orchestrator | Saturday 14 February 2026 06:19:00 +0000 (0:00:00.803) 0:42:12.683 ***** 2026-02-14 06:19:25.196042 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:19:25.196053 | 
orchestrator | 2026-02-14 06:19:25.196063 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-14 06:19:25.196075 | orchestrator | Saturday 14 February 2026 06:19:01 +0000 (0:00:00.762) 0:42:13.447 ***** 2026-02-14 06:19:25.196095 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:19:25.196106 | orchestrator | 2026-02-14 06:19:25.196117 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-14 06:19:25.196128 | orchestrator | Saturday 14 February 2026 06:19:01 +0000 (0:00:00.833) 0:42:14.281 ***** 2026-02-14 06:19:25.196163 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:19:25.196175 | orchestrator | 2026-02-14 06:19:25.196185 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-14 06:19:25.196196 | orchestrator | Saturday 14 February 2026 06:19:02 +0000 (0:00:00.773) 0:42:15.054 ***** 2026-02-14 06:19:25.196207 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:19:25.196218 | orchestrator | 2026-02-14 06:19:25.196229 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-14 06:19:25.196239 | orchestrator | Saturday 14 February 2026 06:19:03 +0000 (0:00:00.775) 0:42:15.830 ***** 2026-02-14 06:19:25.196250 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:19:25.196261 | orchestrator | 2026-02-14 06:19:25.196272 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-14 06:19:25.196282 | orchestrator | Saturday 14 February 2026 06:19:04 +0000 (0:00:00.763) 0:42:16.594 ***** 2026-02-14 06:19:25.196293 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:19:25.196304 | orchestrator | 2026-02-14 06:19:25.196315 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-14 06:19:25.196325 | orchestrator | Saturday 14 
February 2026 06:19:05 +0000 (0:00:00.797) 0:42:17.391 ***** 2026-02-14 06:19:25.196336 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:19:25.196347 | orchestrator | 2026-02-14 06:19:25.196358 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-14 06:19:25.196369 | orchestrator | Saturday 14 February 2026 06:19:06 +0000 (0:00:01.602) 0:42:18.994 ***** 2026-02-14 06:19:25.196379 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:19:25.196390 | orchestrator | 2026-02-14 06:19:25.196401 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-14 06:19:25.196412 | orchestrator | Saturday 14 February 2026 06:19:08 +0000 (0:00:01.909) 0:42:20.903 ***** 2026-02-14 06:19:25.196422 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4 2026-02-14 06:19:25.196434 | orchestrator | 2026-02-14 06:19:25.196445 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-14 06:19:25.196455 | orchestrator | Saturday 14 February 2026 06:19:09 +0000 (0:00:01.398) 0:42:22.301 ***** 2026-02-14 06:19:25.196466 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:19:25.196477 | orchestrator | 2026-02-14 06:19:25.196499 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-14 06:19:25.196509 | orchestrator | Saturday 14 February 2026 06:19:11 +0000 (0:00:01.180) 0:42:23.482 ***** 2026-02-14 06:19:25.196520 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:19:25.196531 | orchestrator | 2026-02-14 06:19:25.196542 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-14 06:19:25.196552 | orchestrator | Saturday 14 February 2026 06:19:12 +0000 (0:00:01.204) 0:42:24.687 ***** 2026-02-14 06:19:25.196563 | orchestrator | ok: [testbed-node-4] => 
(item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-14 06:19:25.196574 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-14 06:19:25.196585 | orchestrator | 2026-02-14 06:19:25.196595 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-14 06:19:25.196606 | orchestrator | Saturday 14 February 2026 06:19:14 +0000 (0:00:01.821) 0:42:26.508 ***** 2026-02-14 06:19:25.196617 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:19:25.196628 | orchestrator | 2026-02-14 06:19:25.196638 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-14 06:19:25.196649 | orchestrator | Saturday 14 February 2026 06:19:15 +0000 (0:00:01.615) 0:42:28.124 ***** 2026-02-14 06:19:25.196660 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:19:25.196671 | orchestrator | 2026-02-14 06:19:25.196681 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-14 06:19:25.196692 | orchestrator | Saturday 14 February 2026 06:19:17 +0000 (0:00:01.205) 0:42:29.329 ***** 2026-02-14 06:19:25.196703 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:19:25.196713 | orchestrator | 2026-02-14 06:19:25.196752 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-14 06:19:25.196764 | orchestrator | Saturday 14 February 2026 06:19:17 +0000 (0:00:00.801) 0:42:30.131 ***** 2026-02-14 06:19:25.196775 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:19:25.196786 | orchestrator | 2026-02-14 06:19:25.196798 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-14 06:19:25.196808 | orchestrator | Saturday 14 February 2026 06:19:18 +0000 (0:00:00.827) 0:42:30.958 ***** 2026-02-14 06:19:25.196819 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for 
testbed-node-4 2026-02-14 06:19:25.196830 | orchestrator | 2026-02-14 06:19:25.196840 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-14 06:19:25.196851 | orchestrator | Saturday 14 February 2026 06:19:19 +0000 (0:00:01.171) 0:42:32.129 ***** 2026-02-14 06:19:25.196862 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:19:25.196873 | orchestrator | 2026-02-14 06:19:25.196884 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-14 06:19:25.196895 | orchestrator | Saturday 14 February 2026 06:19:21 +0000 (0:00:01.820) 0:42:33.949 ***** 2026-02-14 06:19:25.196905 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-14 06:19:25.196916 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-14 06:19:25.196927 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-14 06:19:25.196937 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:19:25.196948 | orchestrator | 2026-02-14 06:19:25.196959 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-14 06:19:25.196970 | orchestrator | Saturday 14 February 2026 06:19:22 +0000 (0:00:01.188) 0:42:35.138 ***** 2026-02-14 06:19:25.196980 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:19:25.196991 | orchestrator | 2026-02-14 06:19:25.197002 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-14 06:19:25.197013 | orchestrator | Saturday 14 February 2026 06:19:23 +0000 (0:00:01.155) 0:42:36.294 ***** 2026-02-14 06:19:25.197030 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:20:09.743764 | orchestrator | 2026-02-14 06:20:09.743938 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-14 06:20:09.743989 | 
orchestrator | Saturday 14 February 2026 06:19:25 +0000 (0:00:01.216) 0:42:37.511 ***** 2026-02-14 06:20:09.744009 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:20:09.744029 | orchestrator | 2026-02-14 06:20:09.744047 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-14 06:20:09.744062 | orchestrator | Saturday 14 February 2026 06:19:26 +0000 (0:00:01.182) 0:42:38.693 ***** 2026-02-14 06:20:09.744074 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:20:09.744085 | orchestrator | 2026-02-14 06:20:09.744096 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-14 06:20:09.744107 | orchestrator | Saturday 14 February 2026 06:19:27 +0000 (0:00:01.197) 0:42:39.891 ***** 2026-02-14 06:20:09.744118 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:20:09.744129 | orchestrator | 2026-02-14 06:20:09.744140 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-14 06:20:09.744151 | orchestrator | Saturday 14 February 2026 06:19:28 +0000 (0:00:00.820) 0:42:40.711 ***** 2026-02-14 06:20:09.744162 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:20:09.744173 | orchestrator | 2026-02-14 06:20:09.744184 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-14 06:20:09.744196 | orchestrator | Saturday 14 February 2026 06:19:30 +0000 (0:00:02.120) 0:42:42.832 ***** 2026-02-14 06:20:09.744206 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:20:09.744217 | orchestrator | 2026-02-14 06:20:09.744228 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-14 06:20:09.744238 | orchestrator | Saturday 14 February 2026 06:19:31 +0000 (0:00:00.866) 0:42:43.698 ***** 2026-02-14 06:20:09.744249 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4 
2026-02-14 06:20:09.744260 | orchestrator | 2026-02-14 06:20:09.744271 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-14 06:20:09.744281 | orchestrator | Saturday 14 February 2026 06:19:32 +0000 (0:00:01.148) 0:42:44.846 ***** 2026-02-14 06:20:09.744293 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:20:09.744307 | orchestrator | 2026-02-14 06:20:09.744320 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-14 06:20:09.744332 | orchestrator | Saturday 14 February 2026 06:19:33 +0000 (0:00:01.148) 0:42:45.995 ***** 2026-02-14 06:20:09.744345 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:20:09.744357 | orchestrator | 2026-02-14 06:20:09.744369 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-14 06:20:09.744382 | orchestrator | Saturday 14 February 2026 06:19:34 +0000 (0:00:01.204) 0:42:47.199 ***** 2026-02-14 06:20:09.744394 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:20:09.744406 | orchestrator | 2026-02-14 06:20:09.744420 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-14 06:20:09.744433 | orchestrator | Saturday 14 February 2026 06:19:36 +0000 (0:00:01.164) 0:42:48.363 ***** 2026-02-14 06:20:09.744445 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:20:09.744457 | orchestrator | 2026-02-14 06:20:09.744470 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-14 06:20:09.744483 | orchestrator | Saturday 14 February 2026 06:19:37 +0000 (0:00:01.169) 0:42:49.533 ***** 2026-02-14 06:20:09.744495 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:20:09.744507 | orchestrator | 2026-02-14 06:20:09.744520 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-14 06:20:09.744533 | orchestrator | 
Saturday 14 February 2026 06:19:38 +0000 (0:00:01.178) 0:42:50.712 ***** 2026-02-14 06:20:09.744545 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:20:09.744557 | orchestrator | 2026-02-14 06:20:09.744570 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-14 06:20:09.744582 | orchestrator | Saturday 14 February 2026 06:19:39 +0000 (0:00:01.414) 0:42:52.126 ***** 2026-02-14 06:20:09.744611 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:20:09.744635 | orchestrator | 2026-02-14 06:20:09.744648 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-14 06:20:09.744661 | orchestrator | Saturday 14 February 2026 06:19:41 +0000 (0:00:01.217) 0:42:53.343 ***** 2026-02-14 06:20:09.744673 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:20:09.744683 | orchestrator | 2026-02-14 06:20:09.744694 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-14 06:20:09.744705 | orchestrator | Saturday 14 February 2026 06:19:42 +0000 (0:00:01.278) 0:42:54.622 ***** 2026-02-14 06:20:09.744716 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:20:09.744727 | orchestrator | 2026-02-14 06:20:09.744737 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-14 06:20:09.744748 | orchestrator | Saturday 14 February 2026 06:19:43 +0000 (0:00:00.800) 0:42:55.422 ***** 2026-02-14 06:20:09.744759 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4 2026-02-14 06:20:09.744770 | orchestrator | 2026-02-14 06:20:09.744781 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-14 06:20:09.744792 | orchestrator | Saturday 14 February 2026 06:19:44 +0000 (0:00:01.134) 0:42:56.557 ***** 2026-02-14 06:20:09.744802 | orchestrator | ok: [testbed-node-4] => 
(item=/etc/ceph) 2026-02-14 06:20:09.744821 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-02-14 06:20:09.744878 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-02-14 06:20:09.744895 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-02-14 06:20:09.744914 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-02-14 06:20:09.744933 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-02-14 06:20:09.744952 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-02-14 06:20:09.744966 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-02-14 06:20:09.744977 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-14 06:20:09.745008 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-14 06:20:09.745020 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-14 06:20:09.745030 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-14 06:20:09.745041 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-14 06:20:09.745052 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-14 06:20:09.745062 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-02-14 06:20:09.745073 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-02-14 06:20:09.745084 | orchestrator | 2026-02-14 06:20:09.745095 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-14 06:20:09.745106 | orchestrator | Saturday 14 February 2026 06:19:50 +0000 (0:00:06.138) 0:43:02.695 ***** 2026-02-14 06:20:09.745116 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4 2026-02-14 06:20:09.745127 | orchestrator | 2026-02-14 06:20:09.745138 | orchestrator | TASK 
[ceph-config : Create rados gateway instance directories] ***************** 2026-02-14 06:20:09.745149 | orchestrator | Saturday 14 February 2026 06:19:51 +0000 (0:00:01.143) 0:43:03.839 ***** 2026-02-14 06:20:09.745160 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-14 06:20:09.745172 | orchestrator | 2026-02-14 06:20:09.745183 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-14 06:20:09.745194 | orchestrator | Saturday 14 February 2026 06:19:53 +0000 (0:00:01.498) 0:43:05.338 ***** 2026-02-14 06:20:09.745205 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-14 06:20:09.745215 | orchestrator | 2026-02-14 06:20:09.745226 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-14 06:20:09.745246 | orchestrator | Saturday 14 February 2026 06:19:55 +0000 (0:00:02.713) 0:43:08.051 ***** 2026-02-14 06:20:09.745257 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:20:09.745268 | orchestrator | 2026-02-14 06:20:09.745279 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-14 06:20:09.745290 | orchestrator | Saturday 14 February 2026 06:19:56 +0000 (0:00:00.825) 0:43:08.877 ***** 2026-02-14 06:20:09.745300 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:20:09.745311 | orchestrator | 2026-02-14 06:20:09.745322 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-14 06:20:09.745333 | orchestrator | Saturday 14 February 2026 06:19:57 +0000 (0:00:00.843) 0:43:09.720 ***** 2026-02-14 06:20:09.745343 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:20:09.745354 | orchestrator | 2026-02-14 06:20:09.745365 | orchestrator | TASK [ceph-config : 
Set_fact rejected_devices] ********************************* 2026-02-14 06:20:09.745376 | orchestrator | Saturday 14 February 2026 06:19:58 +0000 (0:00:00.792) 0:43:10.513 ***** 2026-02-14 06:20:09.745386 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:20:09.745397 | orchestrator | 2026-02-14 06:20:09.745408 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-14 06:20:09.745418 | orchestrator | Saturday 14 February 2026 06:19:59 +0000 (0:00:00.824) 0:43:11.337 ***** 2026-02-14 06:20:09.745429 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:20:09.745440 | orchestrator | 2026-02-14 06:20:09.745451 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-14 06:20:09.745462 | orchestrator | Saturday 14 February 2026 06:19:59 +0000 (0:00:00.803) 0:43:12.141 ***** 2026-02-14 06:20:09.745472 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:20:09.745483 | orchestrator | 2026-02-14 06:20:09.745494 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-14 06:20:09.745511 | orchestrator | Saturday 14 February 2026 06:20:00 +0000 (0:00:00.856) 0:43:12.997 ***** 2026-02-14 06:20:09.745522 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:20:09.745533 | orchestrator | 2026-02-14 06:20:09.745544 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-14 06:20:09.745554 | orchestrator | Saturday 14 February 2026 06:20:01 +0000 (0:00:00.817) 0:43:13.815 ***** 2026-02-14 06:20:09.745565 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:20:09.745576 | orchestrator | 2026-02-14 06:20:09.745587 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-14 06:20:09.745598 | orchestrator | Saturday 14 
February 2026 06:20:02 +0000 (0:00:00.789) 0:43:14.604 ***** 2026-02-14 06:20:09.745608 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:20:09.745619 | orchestrator | 2026-02-14 06:20:09.745629 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-14 06:20:09.745640 | orchestrator | Saturday 14 February 2026 06:20:03 +0000 (0:00:00.782) 0:43:15.387 ***** 2026-02-14 06:20:09.745651 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:20:09.745662 | orchestrator | 2026-02-14 06:20:09.745673 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-14 06:20:09.745683 | orchestrator | Saturday 14 February 2026 06:20:03 +0000 (0:00:00.765) 0:43:16.153 ***** 2026-02-14 06:20:09.745694 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:20:09.745705 | orchestrator | 2026-02-14 06:20:09.745716 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-14 06:20:09.745727 | orchestrator | Saturday 14 February 2026 06:20:04 +0000 (0:00:00.888) 0:43:17.042 ***** 2026-02-14 06:20:09.745737 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-02-14 06:20:09.745748 | orchestrator | 2026-02-14 06:20:09.745759 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-14 06:20:09.745770 | orchestrator | Saturday 14 February 2026 06:20:08 +0000 (0:00:04.134) 0:43:21.176 ***** 2026-02-14 06:20:09.745787 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-14 06:20:52.039326 | orchestrator | 2026-02-14 06:20:52.039445 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-14 06:20:52.039464 | orchestrator | Saturday 14 February 2026 06:20:09 +0000 (0:00:00.882) 0:43:22.058 ***** 2026-02-14 06:20:52.039479 | 
orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-02-14 06:20:52.039493 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-02-14 06:20:52.039506 | orchestrator | 2026-02-14 06:20:52.039518 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-14 06:20:52.039529 | orchestrator | Saturday 14 February 2026 06:20:16 +0000 (0:00:07.206) 0:43:29.265 ***** 2026-02-14 06:20:52.039541 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:20:52.039553 | orchestrator | 2026-02-14 06:20:52.039564 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-14 06:20:52.039575 | orchestrator | Saturday 14 February 2026 06:20:17 +0000 (0:00:00.799) 0:43:30.064 ***** 2026-02-14 06:20:52.039586 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:20:52.039597 | orchestrator | 2026-02-14 06:20:52.039609 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-14 06:20:52.039621 | orchestrator | Saturday 14 February 2026 06:20:18 +0000 (0:00:00.918) 0:43:30.983 ***** 2026-02-14 06:20:52.039632 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:20:52.039643 | orchestrator | 2026-02-14 06:20:52.039654 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv4] **** 2026-02-14 06:20:52.039665 | orchestrator | Saturday 14 February 2026 06:20:19 +0000 (0:00:00.858) 0:43:31.841 ***** 2026-02-14 06:20:52.039676 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:20:52.039687 | orchestrator | 2026-02-14 06:20:52.039698 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-14 06:20:52.039709 | orchestrator | Saturday 14 February 2026 06:20:20 +0000 (0:00:00.874) 0:43:32.716 ***** 2026-02-14 06:20:52.039720 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:20:52.039731 | orchestrator | 2026-02-14 06:20:52.039742 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-14 06:20:52.039753 | orchestrator | Saturday 14 February 2026 06:20:21 +0000 (0:00:00.857) 0:43:33.573 ***** 2026-02-14 06:20:52.039765 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:20:52.039778 | orchestrator | 2026-02-14 06:20:52.039789 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-14 06:20:52.039800 | orchestrator | Saturday 14 February 2026 06:20:22 +0000 (0:00:00.914) 0:43:34.488 ***** 2026-02-14 06:20:52.039812 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-14 06:20:52.039824 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-14 06:20:52.039835 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-14 06:20:52.039846 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:20:52.039859 | orchestrator | 2026-02-14 06:20:52.039873 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-14 06:20:52.039902 | orchestrator | Saturday 14 February 2026 06:20:23 +0000 (0:00:01.182) 0:43:35.670 ***** 2026-02-14 06:20:52.039916 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-14 06:20:52.039976 | 
orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-14 06:20:52.040017 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-14 06:20:52.040031 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:20:52.040045 | orchestrator | 2026-02-14 06:20:52.040058 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-14 06:20:52.040071 | orchestrator | Saturday 14 February 2026 06:20:24 +0000 (0:00:01.177) 0:43:36.847 ***** 2026-02-14 06:20:52.040084 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-14 06:20:52.040096 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-14 06:20:52.040108 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-14 06:20:52.040121 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:20:52.040133 | orchestrator | 2026-02-14 06:20:52.040146 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-14 06:20:52.040159 | orchestrator | Saturday 14 February 2026 06:20:25 +0000 (0:00:01.151) 0:43:37.999 ***** 2026-02-14 06:20:52.040172 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:20:52.040185 | orchestrator | 2026-02-14 06:20:52.040198 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-14 06:20:52.040209 | orchestrator | Saturday 14 February 2026 06:20:26 +0000 (0:00:00.827) 0:43:38.827 ***** 2026-02-14 06:20:52.040220 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-14 06:20:52.040231 | orchestrator | 2026-02-14 06:20:52.040242 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-14 06:20:52.040253 | orchestrator | Saturday 14 February 2026 06:20:27 +0000 (0:00:01.038) 0:43:39.866 ***** 2026-02-14 06:20:52.040264 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:20:52.040275 | orchestrator | 
2026-02-14 06:20:52.040286 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-02-14 06:20:52.040297 | orchestrator | Saturday 14 February 2026 06:20:28 +0000 (0:00:01.438) 0:43:41.304 ***** 2026-02-14 06:20:52.040308 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:20:52.040319 | orchestrator | 2026-02-14 06:20:52.040349 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-02-14 06:20:52.040361 | orchestrator | Saturday 14 February 2026 06:20:29 +0000 (0:00:00.997) 0:43:42.302 ***** 2026-02-14 06:20:52.040372 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 06:20:52.040384 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:20:52.040395 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 06:20:52.040406 | orchestrator | 2026-02-14 06:20:52.040417 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-02-14 06:20:52.040428 | orchestrator | Saturday 14 February 2026 06:20:31 +0000 (0:00:01.345) 0:43:43.648 ***** 2026-02-14 06:20:52.040439 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-4 2026-02-14 06:20:52.040450 | orchestrator | 2026-02-14 06:20:52.040460 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-02-14 06:20:52.040471 | orchestrator | Saturday 14 February 2026 06:20:32 +0000 (0:00:01.121) 0:43:44.769 ***** 2026-02-14 06:20:52.040482 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:20:52.040493 | orchestrator | 2026-02-14 06:20:52.040504 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-02-14 06:20:52.040515 | orchestrator | Saturday 14 February 2026 06:20:33 +0000 (0:00:01.214) 
0:43:45.984 ***** 2026-02-14 06:20:52.040526 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:20:52.040537 | orchestrator | 2026-02-14 06:20:52.040548 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-02-14 06:20:52.040559 | orchestrator | Saturday 14 February 2026 06:20:34 +0000 (0:00:01.124) 0:43:47.108 ***** 2026-02-14 06:20:52.040570 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:20:52.040581 | orchestrator | 2026-02-14 06:20:52.040592 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-02-14 06:20:52.040602 | orchestrator | Saturday 14 February 2026 06:20:36 +0000 (0:00:01.469) 0:43:48.578 ***** 2026-02-14 06:20:52.040622 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:20:52.040632 | orchestrator | 2026-02-14 06:20:52.040644 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-02-14 06:20:52.040654 | orchestrator | Saturday 14 February 2026 06:20:37 +0000 (0:00:01.156) 0:43:49.735 ***** 2026-02-14 06:20:52.040665 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-14 06:20:52.040677 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-14 06:20:52.040688 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-14 06:20:52.040698 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-14 06:20:52.040709 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-14 06:20:52.040720 | orchestrator | 2026-02-14 06:20:52.040731 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-02-14 06:20:52.040742 | orchestrator | Saturday 14 February 2026 06:20:39 +0000 (0:00:02.543) 0:43:52.278 ***** 2026-02-14 
06:20:52.040753 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:20:52.040764 | orchestrator | 2026-02-14 06:20:52.040775 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-02-14 06:20:52.040786 | orchestrator | Saturday 14 February 2026 06:20:40 +0000 (0:00:00.830) 0:43:53.109 ***** 2026-02-14 06:20:52.040797 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-4 2026-02-14 06:20:52.040808 | orchestrator | 2026-02-14 06:20:52.040818 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-02-14 06:20:52.040836 | orchestrator | Saturday 14 February 2026 06:20:41 +0000 (0:00:01.137) 0:43:54.247 ***** 2026-02-14 06:20:52.040847 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-14 06:20:52.040858 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-02-14 06:20:52.040869 | orchestrator | 2026-02-14 06:20:52.040880 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-02-14 06:20:52.040891 | orchestrator | Saturday 14 February 2026 06:20:43 +0000 (0:00:01.876) 0:43:56.123 ***** 2026-02-14 06:20:52.040902 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-14 06:20:52.040913 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-14 06:20:52.040924 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-14 06:20:52.040960 | orchestrator | 2026-02-14 06:20:52.040971 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-02-14 06:20:52.040982 | orchestrator | Saturday 14 February 2026 06:20:47 +0000 (0:00:04.011) 0:44:00.135 ***** 2026-02-14 06:20:52.040993 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-02-14 06:20:52.041004 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-14 
06:20:52.041015 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:20:52.041026 | orchestrator | 2026-02-14 06:20:52.041037 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-14 06:20:52.041047 | orchestrator | Saturday 14 February 2026 06:20:49 +0000 (0:00:01.666) 0:44:01.801 ***** 2026-02-14 06:20:52.041058 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:20:52.041069 | orchestrator | 2026-02-14 06:20:52.041080 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-14 06:20:52.041091 | orchestrator | Saturday 14 February 2026 06:20:50 +0000 (0:00:00.944) 0:44:02.746 ***** 2026-02-14 06:20:52.041102 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:20:52.041113 | orchestrator | 2026-02-14 06:20:52.041124 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-14 06:20:52.041135 | orchestrator | Saturday 14 February 2026 06:20:51 +0000 (0:00:00.818) 0:44:03.565 ***** 2026-02-14 06:20:52.041146 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:20:52.041157 | orchestrator | 2026-02-14 06:20:52.041182 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-14 06:21:59.256325 | orchestrator | Saturday 14 February 2026 06:20:52 +0000 (0:00:00.786) 0:44:04.352 ***** 2026-02-14 06:21:59.256476 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-4 2026-02-14 06:21:59.256496 | orchestrator | 2026-02-14 06:21:59.256509 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-14 06:21:59.256520 | orchestrator | Saturday 14 February 2026 06:20:53 +0000 (0:00:01.133) 0:44:05.485 ***** 2026-02-14 06:21:59.256531 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:21:59.256543 | orchestrator | 2026-02-14 06:21:59.256555 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-02-14 06:21:59.256566 | orchestrator | Saturday 14 February 2026 06:20:54 +0000 (0:00:01.422) 0:44:06.907 ***** 2026-02-14 06:21:59.256576 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:21:59.256587 | orchestrator | 2026-02-14 06:21:59.256598 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-14 06:21:59.256609 | orchestrator | Saturday 14 February 2026 06:20:57 +0000 (0:00:03.320) 0:44:10.228 ***** 2026-02-14 06:21:59.256620 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-4 2026-02-14 06:21:59.256631 | orchestrator | 2026-02-14 06:21:59.256642 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-02-14 06:21:59.256652 | orchestrator | Saturday 14 February 2026 06:20:59 +0000 (0:00:01.148) 0:44:11.376 ***** 2026-02-14 06:21:59.256663 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:21:59.256674 | orchestrator | 2026-02-14 06:21:59.256684 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-02-14 06:21:59.256695 | orchestrator | Saturday 14 February 2026 06:21:01 +0000 (0:00:01.993) 0:44:13.370 ***** 2026-02-14 06:21:59.256706 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:21:59.256717 | orchestrator | 2026-02-14 06:21:59.256727 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-02-14 06:21:59.256738 | orchestrator | Saturday 14 February 2026 06:21:02 +0000 (0:00:01.957) 0:44:15.328 ***** 2026-02-14 06:21:59.256749 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:21:59.256761 | orchestrator | 2026-02-14 06:21:59.256772 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-02-14 06:21:59.256782 | orchestrator | Saturday 14 February 2026 06:21:05 +0000 (0:00:02.219) 0:44:17.547 ***** 2026-02-14 
06:21:59.256793 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:21:59.256804 | orchestrator | 2026-02-14 06:21:59.256815 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-02-14 06:21:59.256826 | orchestrator | Saturday 14 February 2026 06:21:06 +0000 (0:00:01.193) 0:44:18.741 ***** 2026-02-14 06:21:59.256836 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:21:59.256849 | orchestrator | 2026-02-14 06:21:59.256861 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-02-14 06:21:59.256874 | orchestrator | Saturday 14 February 2026 06:21:07 +0000 (0:00:01.168) 0:44:19.909 ***** 2026-02-14 06:21:59.256886 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-02-14 06:21:59.256898 | orchestrator | ok: [testbed-node-4] => (item=3) 2026-02-14 06:21:59.256911 | orchestrator | 2026-02-14 06:21:59.256923 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-02-14 06:21:59.256935 | orchestrator | Saturday 14 February 2026 06:21:09 +0000 (0:00:01.904) 0:44:21.814 ***** 2026-02-14 06:21:59.256948 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-02-14 06:21:59.256959 | orchestrator | ok: [testbed-node-4] => (item=3) 2026-02-14 06:21:59.256971 | orchestrator | 2026-02-14 06:21:59.256984 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-02-14 06:21:59.256997 | orchestrator | Saturday 14 February 2026 06:21:12 +0000 (0:00:02.926) 0:44:24.740 ***** 2026-02-14 06:21:59.257009 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-02-14 06:21:59.257021 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-02-14 06:21:59.257033 | orchestrator | 2026-02-14 06:21:59.257110 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-02-14 06:21:59.257126 | orchestrator | Saturday 14 February 2026 06:21:16 +0000 (0:00:04.190) 
0:44:28.930 ***** 2026-02-14 06:21:59.257139 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:21:59.257151 | orchestrator | 2026-02-14 06:21:59.257164 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-02-14 06:21:59.257256 | orchestrator | Saturday 14 February 2026 06:21:17 +0000 (0:00:00.953) 0:44:29.884 ***** 2026-02-14 06:21:59.257271 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:21:59.257283 | orchestrator | 2026-02-14 06:21:59.257294 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-02-14 06:21:59.257305 | orchestrator | Saturday 14 February 2026 06:21:18 +0000 (0:00:00.903) 0:44:30.787 ***** 2026-02-14 06:21:59.257315 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:21:59.257326 | orchestrator | 2026-02-14 06:21:59.257337 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-02-14 06:21:59.257348 | orchestrator | Saturday 14 February 2026 06:21:19 +0000 (0:00:00.928) 0:44:31.716 ***** 2026-02-14 06:21:59.257358 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:21:59.257369 | orchestrator | 2026-02-14 06:21:59.257380 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-02-14 06:21:59.257391 | orchestrator | Saturday 14 February 2026 06:21:20 +0000 (0:00:00.819) 0:44:32.536 ***** 2026-02-14 06:21:59.257401 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:21:59.257416 | orchestrator | 2026-02-14 06:21:59.257434 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-02-14 06:21:59.257452 | orchestrator | Saturday 14 February 2026 06:21:21 +0000 (0:00:00.835) 0:44:33.372 ***** 2026-02-14 06:21:59.257470 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-02-14 06:21:59.257492 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-02-14 06:21:59.257504 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (598 retries left). 2026-02-14 06:21:59.257537 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (597 retries left). 2026-02-14 06:21:59.257549 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (596 retries left). 2026-02-14 06:21:59.257560 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-14 06:21:59.257571 | orchestrator | 2026-02-14 06:21:59.257582 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-02-14 06:21:59.257592 | orchestrator | 2026-02-14 06:21:59.257603 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-14 06:21:59.257614 | orchestrator | Saturday 14 February 2026 06:21:38 +0000 (0:00:17.239) 0:44:50.612 ***** 2026-02-14 06:21:59.257624 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-02-14 06:21:59.257635 | orchestrator | 2026-02-14 06:21:59.257645 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-14 06:21:59.257656 | orchestrator | Saturday 14 February 2026 06:21:39 +0000 (0:00:01.136) 0:44:51.749 ***** 2026-02-14 06:21:59.257667 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:21:59.257677 | orchestrator | 2026-02-14 06:21:59.257688 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-14 06:21:59.257699 | orchestrator | Saturday 14 February 2026 06:21:40 +0000 (0:00:01.480) 0:44:53.230 ***** 2026-02-14 06:21:59.257710 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:21:59.257720 | orchestrator | 
2026-02-14 06:21:59.257731 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-14 06:21:59.257742 | orchestrator | Saturday 14 February 2026 06:21:42 +0000 (0:00:01.109) 0:44:54.339 ***** 2026-02-14 06:21:59.257753 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:21:59.257763 | orchestrator | 2026-02-14 06:21:59.257787 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-14 06:21:59.257798 | orchestrator | Saturday 14 February 2026 06:21:43 +0000 (0:00:01.448) 0:44:55.788 ***** 2026-02-14 06:21:59.257809 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:21:59.257820 | orchestrator | 2026-02-14 06:21:59.257830 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-14 06:21:59.257841 | orchestrator | Saturday 14 February 2026 06:21:44 +0000 (0:00:01.137) 0:44:56.925 ***** 2026-02-14 06:21:59.257852 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:21:59.257862 | orchestrator | 2026-02-14 06:21:59.257873 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-14 06:21:59.257884 | orchestrator | Saturday 14 February 2026 06:21:45 +0000 (0:00:01.229) 0:44:58.155 ***** 2026-02-14 06:21:59.257894 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:21:59.257905 | orchestrator | 2026-02-14 06:21:59.257916 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-14 06:21:59.257927 | orchestrator | Saturday 14 February 2026 06:21:46 +0000 (0:00:01.169) 0:44:59.325 ***** 2026-02-14 06:21:59.257937 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:21:59.257948 | orchestrator | 2026-02-14 06:21:59.257959 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-14 06:21:59.257969 | orchestrator | Saturday 14 February 2026 06:21:48 +0000 (0:00:01.153) 
0:45:00.479 ***** 2026-02-14 06:21:59.257980 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:21:59.257991 | orchestrator | 2026-02-14 06:21:59.258001 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-14 06:21:59.258012 | orchestrator | Saturday 14 February 2026 06:21:49 +0000 (0:00:01.122) 0:45:01.601 ***** 2026-02-14 06:21:59.258111 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 06:21:59.258123 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:21:59.258141 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 06:21:59.258153 | orchestrator | 2026-02-14 06:21:59.258163 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-14 06:21:59.258174 | orchestrator | Saturday 14 February 2026 06:21:51 +0000 (0:00:02.057) 0:45:03.659 ***** 2026-02-14 06:21:59.258185 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:21:59.258195 | orchestrator | 2026-02-14 06:21:59.258206 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-14 06:21:59.258217 | orchestrator | Saturday 14 February 2026 06:21:52 +0000 (0:00:01.283) 0:45:04.942 ***** 2026-02-14 06:21:59.258228 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 06:21:59.258238 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:21:59.258249 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 06:21:59.258259 | orchestrator | 2026-02-14 06:21:59.258270 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-14 06:21:59.258280 | orchestrator | Saturday 14 February 2026 06:21:56 +0000 
(0:00:03.467) 0:45:08.409 ***** 2026-02-14 06:21:59.258291 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-14 06:21:59.258303 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-14 06:21:59.258313 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-14 06:21:59.258324 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:21:59.258335 | orchestrator | 2026-02-14 06:21:59.258345 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-14 06:21:59.258357 | orchestrator | Saturday 14 February 2026 06:21:57 +0000 (0:00:01.459) 0:45:09.869 ***** 2026-02-14 06:21:59.258369 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-14 06:21:59.258400 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-14 06:22:19.332806 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-14 06:22:19.332920 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:22:19.332934 | orchestrator | 2026-02-14 06:22:19.332945 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-14 06:22:19.332956 | orchestrator | Saturday 14 February 2026 06:21:59 +0000 (0:00:01.700) 0:45:11.570 ***** 2026-02-14 06:22:19.332968 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 06:22:19.332981 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 06:22:19.332991 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 06:22:19.333001 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:22:19.333010 | orchestrator | 2026-02-14 06:22:19.333020 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-14 06:22:19.333030 | orchestrator | Saturday 14 February 2026 06:22:00 +0000 (0:00:01.203) 0:45:12.774 ***** 2026-02-14 06:22:19.333058 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'fcade5e8eca4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-14 06:21:53.562438', 'end': '2026-02-14 06:21:53.624778', 'delta': '0:00:00.062340', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter 
name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fcade5e8eca4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-14 06:22:19.333072 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'b8937503c016', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-14 06:21:54.164757', 'end': '2026-02-14 06:21:54.219434', 'delta': '0:00:00.054677', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b8937503c016'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-14 06:22:19.333181 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'bc1e9cbf1ddd', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-14 06:21:54.738630', 'end': '2026-02-14 06:21:54.794619', 'delta': '0:00:00.055989', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['bc1e9cbf1ddd'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-14 06:22:19.333195 
| orchestrator | 2026-02-14 06:22:19.333206 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-14 06:22:19.333215 | orchestrator | Saturday 14 February 2026 06:22:01 +0000 (0:00:01.242) 0:45:14.017 ***** 2026-02-14 06:22:19.333225 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:22:19.333236 | orchestrator | 2026-02-14 06:22:19.333245 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-14 06:22:19.333255 | orchestrator | Saturday 14 February 2026 06:22:02 +0000 (0:00:01.298) 0:45:15.316 ***** 2026-02-14 06:22:19.333265 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:22:19.333274 | orchestrator | 2026-02-14 06:22:19.333284 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-14 06:22:19.333294 | orchestrator | Saturday 14 February 2026 06:22:04 +0000 (0:00:01.262) 0:45:16.579 ***** 2026-02-14 06:22:19.333303 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:22:19.333313 | orchestrator | 2026-02-14 06:22:19.333323 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-14 06:22:19.333333 | orchestrator | Saturday 14 February 2026 06:22:05 +0000 (0:00:01.205) 0:45:17.784 ***** 2026-02-14 06:22:19.333344 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-14 06:22:19.333355 | orchestrator | 2026-02-14 06:22:19.333366 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-14 06:22:19.333377 | orchestrator | Saturday 14 February 2026 06:22:07 +0000 (0:00:01.924) 0:45:19.708 ***** 2026-02-14 06:22:19.333388 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:22:19.333399 | orchestrator | 2026-02-14 06:22:19.333410 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-14 06:22:19.333421 | orchestrator | Saturday 14 
February 2026 06:22:08 +0000 (0:00:01.188) 0:45:20.896 ***** 2026-02-14 06:22:19.333432 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:22:19.333443 | orchestrator | 2026-02-14 06:22:19.333454 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-14 06:22:19.333465 | orchestrator | Saturday 14 February 2026 06:22:09 +0000 (0:00:01.183) 0:45:22.079 ***** 2026-02-14 06:22:19.333476 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:22:19.333487 | orchestrator | 2026-02-14 06:22:19.333498 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-14 06:22:19.333510 | orchestrator | Saturday 14 February 2026 06:22:11 +0000 (0:00:01.259) 0:45:23.339 ***** 2026-02-14 06:22:19.333520 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:22:19.333531 | orchestrator | 2026-02-14 06:22:19.333542 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-14 06:22:19.333554 | orchestrator | Saturday 14 February 2026 06:22:12 +0000 (0:00:01.130) 0:45:24.470 ***** 2026-02-14 06:22:19.333565 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:22:19.333576 | orchestrator | 2026-02-14 06:22:19.333587 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-14 06:22:19.333598 | orchestrator | Saturday 14 February 2026 06:22:13 +0000 (0:00:01.101) 0:45:25.571 ***** 2026-02-14 06:22:19.333609 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:22:19.333620 | orchestrator | 2026-02-14 06:22:19.333630 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-14 06:22:19.333653 | orchestrator | Saturday 14 February 2026 06:22:14 +0000 (0:00:01.203) 0:45:26.775 ***** 2026-02-14 06:22:19.333664 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:22:19.333675 | orchestrator | 2026-02-14 06:22:19.333686 | orchestrator | 
TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-14 06:22:19.333701 | orchestrator | Saturday 14 February 2026 06:22:15 +0000 (0:00:01.124) 0:45:27.899 ***** 2026-02-14 06:22:19.333711 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:22:19.333721 | orchestrator | 2026-02-14 06:22:19.333730 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-14 06:22:19.333740 | orchestrator | Saturday 14 February 2026 06:22:16 +0000 (0:00:01.156) 0:45:29.055 ***** 2026-02-14 06:22:19.333749 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:22:19.333759 | orchestrator | 2026-02-14 06:22:19.333769 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-14 06:22:19.333779 | orchestrator | Saturday 14 February 2026 06:22:17 +0000 (0:00:01.125) 0:45:30.181 ***** 2026-02-14 06:22:19.333789 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:22:19.333799 | orchestrator | 2026-02-14 06:22:19.333808 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-14 06:22:19.333818 | orchestrator | Saturday 14 February 2026 06:22:19 +0000 (0:00:01.228) 0:45:31.409 ***** 2026-02-14 06:22:19.333828 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:22:19.333845 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f7da5590--35e5--5703--96c8--37fe127c27f7-osd--block--f7da5590--35e5--5703--96c8--37fe127c27f7', 
'dm-uuid-LVM-MtrIT20WffpmoZtgfeTXRFdMHN6P3sAdBjy5doWEhe9rKv9L584cW3XE9oTwvrjF'], 'uuids': ['d1275021-b819-484f-a475-f1a37389bb5c'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '54e6ca54', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Bjy5do-WEhe-9rKv-9L58-4cW3-XE9o-TwvrjF']}})  2026-02-14 06:22:19.338370 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43152e32-b25a-4e6e-b6b7-c3272099ce67', 'scsi-SQEMU_QEMU_HARDDISK_43152e32-b25a-4e6e-b6b7-c3272099ce67'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '43152e32', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-14 06:22:19.338413 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-56EAYM-xHsu-7hCn-RY2l-0van-u71J-PPT3Ej', 'scsi-0QEMU_QEMU_HARDDISK_89ffb490-ef56-465e-9c2a-8772cc279d48', 'scsi-SQEMU_QEMU_HARDDISK_89ffb490-ef56-465e-9c2a-8772cc279d48'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '89ffb490', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--1745485d--ab31--507e--930d--8d3ce82a0691-osd--block--1745485d--ab31--507e--930d--8d3ce82a0691']}})  2026-02-14 06:22:19.338439 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:22:19.338451 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:22:19.338470 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-06-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-14 06:22:19.338481 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:22:19.338491 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ozNSSm-TzZ4-0xZx-DrUn-qvt7-q7MT-Hzgzhl', 'dm-uuid-CRYPT-LUKS2-f72393e18a524b3b834b9c577813242e-ozNSSm-TzZ4-0xZx-DrUn-qvt7-q7MT-Hzgzhl'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-14 06:22:19.338512 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:22:19.338523 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1745485d--ab31--507e--930d--8d3ce82a0691-osd--block--1745485d--ab31--507e--930d--8d3ce82a0691', 'dm-uuid-LVM-XF74CRGH0USDiTPtHNxBQbnIHrjKBwEGozNSSmTzZ40xZxDrUnqvt7q7MTHzgzhl'], 'uuids': ['f72393e1-8a52-4b3b-834b-9c577813242e'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '89ffb490', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['ozNSSm-TzZ4-0xZx-DrUn-qvt7-q7MT-Hzgzhl']}})  2026-02-14 06:22:19.338534 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-5s32D9-BYka-Bj8X-nglK-5PU8-KqP1-tEDCHR', 'scsi-0QEMU_QEMU_HARDDISK_54e6ca54-a1fe-4396-8891-5cf52f763d40', 'scsi-SQEMU_QEMU_HARDDISK_54e6ca54-a1fe-4396-8891-5cf52f763d40'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '54e6ca54', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--f7da5590--35e5--5703--96c8--37fe127c27f7-osd--block--f7da5590--35e5--5703--96c8--37fe127c27f7']}})  2026-02-14 06:22:19.338552 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:22:19.338580 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '69aee15b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part16', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part14', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part15', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part1', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-14 06:22:20.795824 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:22:20.795929 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:22:20.795971 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Bjy5do-WEhe-9rKv-9L58-4cW3-XE9o-TwvrjF', 'dm-uuid-CRYPT-LUKS2-d1275021b819484fa475f1a37389bb5c-Bjy5do-WEhe-9rKv-9L58-4cW3-XE9o-TwvrjF'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-14 06:22:20.795987 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:22:20.796000 | orchestrator | 2026-02-14 06:22:20.796012 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-14 06:22:20.796024 | orchestrator | Saturday 14 February 2026 06:22:20 +0000 (0:00:01.483) 0:45:32.893 ***** 2026-02-14 06:22:20.796051 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:22:20.796066 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f7da5590--35e5--5703--96c8--37fe127c27f7-osd--block--f7da5590--35e5--5703--96c8--37fe127c27f7', 'dm-uuid-LVM-MtrIT20WffpmoZtgfeTXRFdMHN6P3sAdBjy5doWEhe9rKv9L584cW3XE9oTwvrjF'], 'uuids': ['d1275021-b819-484f-a475-f1a37389bb5c'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '54e6ca54', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Bjy5do-WEhe-9rKv-9L58-4cW3-XE9o-TwvrjF']}}, 'ansible_loop_var': 'item'})  2026-02-14 06:22:20.796078 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43152e32-b25a-4e6e-b6b7-c3272099ce67', 'scsi-SQEMU_QEMU_HARDDISK_43152e32-b25a-4e6e-b6b7-c3272099ce67'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '43152e32', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:22:20.796111 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-56EAYM-xHsu-7hCn-RY2l-0van-u71J-PPT3Ej', 'scsi-0QEMU_QEMU_HARDDISK_89ffb490-ef56-465e-9c2a-8772cc279d48', 'scsi-SQEMU_QEMU_HARDDISK_89ffb490-ef56-465e-9c2a-8772cc279d48'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '89ffb490', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--1745485d--ab31--507e--930d--8d3ce82a0691-osd--block--1745485d--ab31--507e--930d--8d3ce82a0691']}}, 'ansible_loop_var': 'item'})  2026-02-14 06:22:20.796227 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:22:20.796241 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:22:20.796259 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-06-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:22:20.796271 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:22:20.796292 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ozNSSm-TzZ4-0xZx-DrUn-qvt7-q7MT-Hzgzhl', 'dm-uuid-CRYPT-LUKS2-f72393e18a524b3b834b9c577813242e-ozNSSm-TzZ4-0xZx-DrUn-qvt7-q7MT-Hzgzhl'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:22:26.133494 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:22:26.133632 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1745485d--ab31--507e--930d--8d3ce82a0691-osd--block--1745485d--ab31--507e--930d--8d3ce82a0691', 'dm-uuid-LVM-XF74CRGH0USDiTPtHNxBQbnIHrjKBwEGozNSSmTzZ40xZxDrUnqvt7q7MTHzgzhl'], 'uuids': ['f72393e1-8a52-4b3b-834b-9c577813242e'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '89ffb490', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['ozNSSm-TzZ4-0xZx-DrUn-qvt7-q7MT-Hzgzhl']}}, 'ansible_loop_var': 'item'})  2026-02-14 06:22:26.133664 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-5s32D9-BYka-Bj8X-nglK-5PU8-KqP1-tEDCHR', 'scsi-0QEMU_QEMU_HARDDISK_54e6ca54-a1fe-4396-8891-5cf52f763d40', 'scsi-SQEMU_QEMU_HARDDISK_54e6ca54-a1fe-4396-8891-5cf52f763d40'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '54e6ca54', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--f7da5590--35e5--5703--96c8--37fe127c27f7-osd--block--f7da5590--35e5--5703--96c8--37fe127c27f7']}}, 'ansible_loop_var': 'item'})  2026-02-14 06:22:26.133682 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:22:26.133717 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '69aee15b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part16', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part14', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part15', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part1', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:22:26.133741 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:22:26.133758 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:22:26.133770 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Bjy5do-WEhe-9rKv-9L58-4cW3-XE9o-TwvrjF', 'dm-uuid-CRYPT-LUKS2-d1275021b819484fa475f1a37389bb5c-Bjy5do-WEhe-9rKv-9L58-4cW3-XE9o-TwvrjF'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:22:26.133783 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:22:26.133796 | orchestrator | 2026-02-14 06:22:26.133807 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-14 06:22:26.133819 | orchestrator | Saturday 14 February 2026 06:22:22 +0000 (0:00:01.451) 0:45:34.345 ***** 2026-02-14 06:22:26.133830 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:22:26.133842 | orchestrator | 2026-02-14 06:22:26.133853 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-14 06:22:26.133864 | orchestrator | Saturday 14 February 2026 06:22:23 +0000 (0:00:01.469) 0:45:35.814 ***** 2026-02-14 06:22:26.133874 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:22:26.133885 | orchestrator | 2026-02-14 06:22:26.133896 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-14 06:22:26.133907 | orchestrator | Saturday 14 February 2026 06:22:24 +0000 (0:00:01.135) 0:45:36.950 ***** 2026-02-14 06:22:26.133924 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:22:26.133935 | orchestrator | 2026-02-14 06:22:26.133945 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-14 06:22:26.133963 | orchestrator | Saturday 14 February 2026 06:22:26 +0000 (0:00:01.500) 0:45:38.451 ***** 2026-02-14 06:23:10.081962 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:23:10.082166 | orchestrator | 2026-02-14 06:23:10.082188 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-14 06:23:10.082202 | orchestrator | Saturday 14 February 2026 06:22:27 +0000 (0:00:01.227) 0:45:39.679 ***** 2026-02-14 06:23:10.082272 | orchestrator | skipping: [testbed-node-5] 2026-02-14 
06:23:10.082285 | orchestrator | 2026-02-14 06:23:10.082297 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-14 06:23:10.082308 | orchestrator | Saturday 14 February 2026 06:22:28 +0000 (0:00:01.241) 0:45:40.921 ***** 2026-02-14 06:23:10.082319 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:23:10.082330 | orchestrator | 2026-02-14 06:23:10.082341 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-14 06:23:10.082352 | orchestrator | Saturday 14 February 2026 06:22:29 +0000 (0:00:01.185) 0:45:42.106 ***** 2026-02-14 06:23:10.082364 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-14 06:23:10.082375 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-14 06:23:10.082386 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-14 06:23:10.082397 | orchestrator | 2026-02-14 06:23:10.082407 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-14 06:23:10.082418 | orchestrator | Saturday 14 February 2026 06:22:32 +0000 (0:00:02.324) 0:45:44.431 ***** 2026-02-14 06:23:10.082429 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-14 06:23:10.082442 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-14 06:23:10.082455 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-14 06:23:10.082468 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:23:10.082481 | orchestrator | 2026-02-14 06:23:10.082493 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-14 06:23:10.082506 | orchestrator | Saturday 14 February 2026 06:22:33 +0000 (0:00:01.210) 0:45:45.642 ***** 2026-02-14 06:23:10.082519 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-02-14 06:23:10.082532 | 
orchestrator | 2026-02-14 06:23:10.082545 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-14 06:23:10.082559 | orchestrator | Saturday 14 February 2026 06:22:34 +0000 (0:00:01.146) 0:45:46.788 ***** 2026-02-14 06:23:10.082571 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:23:10.082584 | orchestrator | 2026-02-14 06:23:10.082596 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-14 06:23:10.082609 | orchestrator | Saturday 14 February 2026 06:22:35 +0000 (0:00:01.200) 0:45:47.989 ***** 2026-02-14 06:23:10.082621 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:23:10.082634 | orchestrator | 2026-02-14 06:23:10.082646 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-14 06:23:10.082679 | orchestrator | Saturday 14 February 2026 06:22:36 +0000 (0:00:01.155) 0:45:49.144 ***** 2026-02-14 06:23:10.082693 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:23:10.082706 | orchestrator | 2026-02-14 06:23:10.082717 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-14 06:23:10.082728 | orchestrator | Saturday 14 February 2026 06:22:37 +0000 (0:00:01.105) 0:45:50.250 ***** 2026-02-14 06:23:10.082739 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:23:10.082750 | orchestrator | 2026-02-14 06:23:10.082761 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-14 06:23:10.082772 | orchestrator | Saturday 14 February 2026 06:22:39 +0000 (0:00:01.251) 0:45:51.501 ***** 2026-02-14 06:23:10.082806 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-14 06:23:10.082817 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-14 06:23:10.082828 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-5)  2026-02-14 06:23:10.082839 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:23:10.082849 | orchestrator | 2026-02-14 06:23:10.082860 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-14 06:23:10.082871 | orchestrator | Saturday 14 February 2026 06:22:40 +0000 (0:00:01.568) 0:45:53.070 ***** 2026-02-14 06:23:10.082881 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-14 06:23:10.082892 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-14 06:23:10.082902 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-14 06:23:10.082913 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:23:10.082923 | orchestrator | 2026-02-14 06:23:10.082934 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-14 06:23:10.082945 | orchestrator | Saturday 14 February 2026 06:22:42 +0000 (0:00:01.445) 0:45:54.515 ***** 2026-02-14 06:23:10.082955 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-14 06:23:10.082969 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-14 06:23:10.082993 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-14 06:23:10.083022 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:23:10.083042 | orchestrator | 2026-02-14 06:23:10.083061 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-14 06:23:10.083078 | orchestrator | Saturday 14 February 2026 06:22:43 +0000 (0:00:01.438) 0:45:55.954 ***** 2026-02-14 06:23:10.083098 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:23:10.083118 | orchestrator | 2026-02-14 06:23:10.083137 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-14 06:23:10.083158 | orchestrator | Saturday 14 February 2026 06:22:44 +0000 
(0:00:01.164) 0:45:57.119 ***** 2026-02-14 06:23:10.083173 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-14 06:23:10.083183 | orchestrator | 2026-02-14 06:23:10.083194 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-14 06:23:10.083259 | orchestrator | Saturday 14 February 2026 06:22:46 +0000 (0:00:01.711) 0:45:58.831 ***** 2026-02-14 06:23:10.083304 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 06:23:10.083326 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:23:10.083345 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 06:23:10.083365 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-14 06:23:10.083385 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-14 06:23:10.083403 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-02-14 06:23:10.083421 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-14 06:23:10.083440 | orchestrator | 2026-02-14 06:23:10.083452 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-14 06:23:10.083462 | orchestrator | Saturday 14 February 2026 06:22:48 +0000 (0:00:02.189) 0:46:01.020 ***** 2026-02-14 06:23:10.083473 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 06:23:10.083483 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:23:10.083494 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 06:23:10.083504 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-02-14 06:23:10.083515 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-14 06:23:10.083538 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-02-14 06:23:10.083549 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-14 06:23:10.083560 | orchestrator | 2026-02-14 06:23:10.083571 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-02-14 06:23:10.083582 | orchestrator | Saturday 14 February 2026 06:22:51 +0000 (0:00:02.340) 0:46:03.360 ***** 2026-02-14 06:23:10.083592 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:23:10.083603 | orchestrator | 2026-02-14 06:23:10.083613 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-02-14 06:23:10.083624 | orchestrator | Saturday 14 February 2026 06:22:52 +0000 (0:00:01.090) 0:46:04.451 ***** 2026-02-14 06:23:10.083635 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:23:10.083646 | orchestrator | 2026-02-14 06:23:10.083656 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-02-14 06:23:10.083667 | orchestrator | Saturday 14 February 2026 06:22:52 +0000 (0:00:00.759) 0:46:05.211 ***** 2026-02-14 06:23:10.083677 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:23:10.083688 | orchestrator | 2026-02-14 06:23:10.083698 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-02-14 06:23:10.083717 | orchestrator | Saturday 14 February 2026 06:22:53 +0000 (0:00:00.935) 0:46:06.147 ***** 2026-02-14 06:23:10.083728 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-02-14 06:23:10.083739 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-02-14 06:23:10.083750 | orchestrator | 2026-02-14 06:23:10.083761 | orchestrator | TASK [ceph-handler : Include 
check_running_cluster.yml] ************************ 2026-02-14 06:23:10.083771 | orchestrator | Saturday 14 February 2026 06:22:58 +0000 (0:00:04.770) 0:46:10.917 ***** 2026-02-14 06:23:10.083782 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5 2026-02-14 06:23:10.083793 | orchestrator | 2026-02-14 06:23:10.083804 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-14 06:23:10.083815 | orchestrator | Saturday 14 February 2026 06:22:59 +0000 (0:00:01.134) 0:46:12.051 ***** 2026-02-14 06:23:10.083825 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5 2026-02-14 06:23:10.083836 | orchestrator | 2026-02-14 06:23:10.083847 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-14 06:23:10.083857 | orchestrator | Saturday 14 February 2026 06:23:00 +0000 (0:00:01.157) 0:46:13.209 ***** 2026-02-14 06:23:10.083868 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:23:10.083879 | orchestrator | 2026-02-14 06:23:10.083890 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-14 06:23:10.083900 | orchestrator | Saturday 14 February 2026 06:23:02 +0000 (0:00:01.182) 0:46:14.392 ***** 2026-02-14 06:23:10.083911 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:23:10.083922 | orchestrator | 2026-02-14 06:23:10.083933 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-14 06:23:10.083943 | orchestrator | Saturday 14 February 2026 06:23:03 +0000 (0:00:01.479) 0:46:15.871 ***** 2026-02-14 06:23:10.083954 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:23:10.083965 | orchestrator | 2026-02-14 06:23:10.083975 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-14 06:23:10.083986 | orchestrator | 
Saturday 14 February 2026 06:23:05 +0000 (0:00:01.580) 0:46:17.451 ***** 2026-02-14 06:23:10.083997 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:23:10.084007 | orchestrator | 2026-02-14 06:23:10.084018 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-14 06:23:10.084028 | orchestrator | Saturday 14 February 2026 06:23:06 +0000 (0:00:01.507) 0:46:18.959 ***** 2026-02-14 06:23:10.084039 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:23:10.084050 | orchestrator | 2026-02-14 06:23:10.084061 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-14 06:23:10.084071 | orchestrator | Saturday 14 February 2026 06:23:07 +0000 (0:00:01.117) 0:46:20.077 ***** 2026-02-14 06:23:10.084089 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:23:10.084100 | orchestrator | 2026-02-14 06:23:10.084110 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-14 06:23:10.084121 | orchestrator | Saturday 14 February 2026 06:23:08 +0000 (0:00:01.155) 0:46:21.232 ***** 2026-02-14 06:23:10.084132 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:23:10.084144 | orchestrator | 2026-02-14 06:23:10.084174 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-14 06:23:50.355561 | orchestrator | Saturday 14 February 2026 06:23:10 +0000 (0:00:01.161) 0:46:22.394 ***** 2026-02-14 06:23:50.355677 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:23:50.355695 | orchestrator | 2026-02-14 06:23:50.355708 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-14 06:23:50.355719 | orchestrator | Saturday 14 February 2026 06:23:11 +0000 (0:00:01.499) 0:46:23.894 ***** 2026-02-14 06:23:50.355731 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:23:50.355742 | orchestrator | 2026-02-14 06:23:50.355753 | orchestrator | 
TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-14 06:23:50.355764 | orchestrator | Saturday 14 February 2026 06:23:13 +0000 (0:00:01.525) 0:46:25.419 ***** 2026-02-14 06:23:50.355775 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:23:50.355786 | orchestrator | 2026-02-14 06:23:50.355797 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-14 06:23:50.355807 | orchestrator | Saturday 14 February 2026 06:23:13 +0000 (0:00:00.762) 0:46:26.182 ***** 2026-02-14 06:23:50.355818 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:23:50.355829 | orchestrator | 2026-02-14 06:23:50.355840 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-14 06:23:50.355850 | orchestrator | Saturday 14 February 2026 06:23:14 +0000 (0:00:00.784) 0:46:26.967 ***** 2026-02-14 06:23:50.355861 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:23:50.355872 | orchestrator | 2026-02-14 06:23:50.355883 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-14 06:23:50.355893 | orchestrator | Saturday 14 February 2026 06:23:15 +0000 (0:00:00.794) 0:46:27.761 ***** 2026-02-14 06:23:50.355904 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:23:50.355915 | orchestrator | 2026-02-14 06:23:50.355925 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-14 06:23:50.355936 | orchestrator | Saturday 14 February 2026 06:23:16 +0000 (0:00:00.771) 0:46:28.532 ***** 2026-02-14 06:23:50.355946 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:23:50.355957 | orchestrator | 2026-02-14 06:23:50.355968 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-14 06:23:50.355979 | orchestrator | Saturday 14 February 2026 06:23:17 +0000 (0:00:00.804) 0:46:29.337 ***** 2026-02-14 06:23:50.355990 | 
orchestrator | skipping: [testbed-node-5] 2026-02-14 06:23:50.356000 | orchestrator | 2026-02-14 06:23:50.356011 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-14 06:23:50.356022 | orchestrator | Saturday 14 February 2026 06:23:17 +0000 (0:00:00.783) 0:46:30.120 ***** 2026-02-14 06:23:50.356032 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:23:50.356043 | orchestrator | 2026-02-14 06:23:50.356054 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-14 06:23:50.356064 | orchestrator | Saturday 14 February 2026 06:23:18 +0000 (0:00:00.875) 0:46:30.996 ***** 2026-02-14 06:23:50.356075 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:23:50.356085 | orchestrator | 2026-02-14 06:23:50.356114 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-14 06:23:50.356127 | orchestrator | Saturday 14 February 2026 06:23:19 +0000 (0:00:00.798) 0:46:31.794 ***** 2026-02-14 06:23:50.356139 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:23:50.356152 | orchestrator | 2026-02-14 06:23:50.356164 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-14 06:23:50.356177 | orchestrator | Saturday 14 February 2026 06:23:20 +0000 (0:00:00.813) 0:46:32.608 ***** 2026-02-14 06:23:50.356218 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:23:50.356231 | orchestrator | 2026-02-14 06:23:50.356243 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-14 06:23:50.356255 | orchestrator | Saturday 14 February 2026 06:23:21 +0000 (0:00:00.828) 0:46:33.437 ***** 2026-02-14 06:23:50.356268 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:23:50.356314 | orchestrator | 2026-02-14 06:23:50.356328 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-14 
06:23:50.356341 | orchestrator | Saturday 14 February 2026 06:23:21 +0000 (0:00:00.810) 0:46:34.247 ***** 2026-02-14 06:23:50.356353 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:23:50.356365 | orchestrator | 2026-02-14 06:23:50.356377 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-14 06:23:50.356390 | orchestrator | Saturday 14 February 2026 06:23:22 +0000 (0:00:00.770) 0:46:35.018 ***** 2026-02-14 06:23:50.356402 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:23:50.356414 | orchestrator | 2026-02-14 06:23:50.356426 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-14 06:23:50.356438 | orchestrator | Saturday 14 February 2026 06:23:23 +0000 (0:00:00.821) 0:46:35.839 ***** 2026-02-14 06:23:50.356451 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:23:50.356462 | orchestrator | 2026-02-14 06:23:50.356472 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-14 06:23:50.356483 | orchestrator | Saturday 14 February 2026 06:23:24 +0000 (0:00:00.800) 0:46:36.640 ***** 2026-02-14 06:23:50.356494 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:23:50.356504 | orchestrator | 2026-02-14 06:23:50.356515 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-14 06:23:50.356525 | orchestrator | Saturday 14 February 2026 06:23:25 +0000 (0:00:00.776) 0:46:37.416 ***** 2026-02-14 06:23:50.356536 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:23:50.356546 | orchestrator | 2026-02-14 06:23:50.356557 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-14 06:23:50.356568 | orchestrator | Saturday 14 February 2026 06:23:25 +0000 (0:00:00.777) 0:46:38.194 ***** 2026-02-14 06:23:50.356578 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:23:50.356589 | 
orchestrator | 2026-02-14 06:23:50.356600 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-14 06:23:50.356611 | orchestrator | Saturday 14 February 2026 06:23:26 +0000 (0:00:00.772) 0:46:38.967 ***** 2026-02-14 06:23:50.356622 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:23:50.356632 | orchestrator | 2026-02-14 06:23:50.356643 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-14 06:23:50.356654 | orchestrator | Saturday 14 February 2026 06:23:27 +0000 (0:00:00.772) 0:46:39.740 ***** 2026-02-14 06:23:50.356682 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:23:50.356694 | orchestrator | 2026-02-14 06:23:50.356705 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-14 06:23:50.356715 | orchestrator | Saturday 14 February 2026 06:23:28 +0000 (0:00:00.806) 0:46:40.546 ***** 2026-02-14 06:23:50.356726 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:23:50.356737 | orchestrator | 2026-02-14 06:23:50.356747 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-14 06:23:50.356758 | orchestrator | Saturday 14 February 2026 06:23:29 +0000 (0:00:00.786) 0:46:41.333 ***** 2026-02-14 06:23:50.356769 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:23:50.356779 | orchestrator | 2026-02-14 06:23:50.356790 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-14 06:23:50.356801 | orchestrator | Saturday 14 February 2026 06:23:29 +0000 (0:00:00.771) 0:46:42.105 ***** 2026-02-14 06:23:50.356811 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:23:50.356822 | orchestrator | 2026-02-14 06:23:50.356833 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-14 06:23:50.356843 | orchestrator | Saturday 14 
February 2026 06:23:30 +0000 (0:00:00.840) 0:46:42.946 ***** 2026-02-14 06:23:50.356863 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:23:50.356874 | orchestrator | 2026-02-14 06:23:50.356885 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-14 06:23:50.356896 | orchestrator | Saturday 14 February 2026 06:23:32 +0000 (0:00:01.542) 0:46:44.488 ***** 2026-02-14 06:23:50.356906 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:23:50.356917 | orchestrator | 2026-02-14 06:23:50.356928 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-14 06:23:50.356939 | orchestrator | Saturday 14 February 2026 06:23:34 +0000 (0:00:01.887) 0:46:46.376 ***** 2026-02-14 06:23:50.356949 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5 2026-02-14 06:23:50.356961 | orchestrator | 2026-02-14 06:23:50.356972 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-14 06:23:50.356983 | orchestrator | Saturday 14 February 2026 06:23:35 +0000 (0:00:01.160) 0:46:47.536 ***** 2026-02-14 06:23:50.356993 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:23:50.357004 | orchestrator | 2026-02-14 06:23:50.357015 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-14 06:23:50.357026 | orchestrator | Saturday 14 February 2026 06:23:36 +0000 (0:00:01.180) 0:46:48.717 ***** 2026-02-14 06:23:50.357037 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:23:50.357048 | orchestrator | 2026-02-14 06:23:50.357059 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-14 06:23:50.357070 | orchestrator | Saturday 14 February 2026 06:23:37 +0000 (0:00:01.130) 0:46:49.848 ***** 2026-02-14 06:23:50.357081 | orchestrator | ok: [testbed-node-5] => 
(item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-14 06:23:50.357097 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-14 06:23:50.357108 | orchestrator | 2026-02-14 06:23:50.357119 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-14 06:23:50.357129 | orchestrator | Saturday 14 February 2026 06:23:39 +0000 (0:00:01.848) 0:46:51.697 ***** 2026-02-14 06:23:50.357140 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:23:50.357151 | orchestrator | 2026-02-14 06:23:50.357161 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-14 06:23:50.357172 | orchestrator | Saturday 14 February 2026 06:23:40 +0000 (0:00:01.548) 0:46:53.246 ***** 2026-02-14 06:23:50.357183 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:23:50.357194 | orchestrator | 2026-02-14 06:23:50.357204 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-14 06:23:50.357215 | orchestrator | Saturday 14 February 2026 06:23:42 +0000 (0:00:01.156) 0:46:54.402 ***** 2026-02-14 06:23:50.357226 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:23:50.357237 | orchestrator | 2026-02-14 06:23:50.357247 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-14 06:23:50.357258 | orchestrator | Saturday 14 February 2026 06:23:43 +0000 (0:00:01.026) 0:46:55.429 ***** 2026-02-14 06:23:50.357268 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:23:50.357298 | orchestrator | 2026-02-14 06:23:50.357309 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-14 06:23:50.357320 | orchestrator | Saturday 14 February 2026 06:23:43 +0000 (0:00:00.780) 0:46:56.210 ***** 2026-02-14 06:23:50.357331 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for 
testbed-node-5 2026-02-14 06:23:50.357341 | orchestrator | 2026-02-14 06:23:50.357352 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-14 06:23:50.357363 | orchestrator | Saturday 14 February 2026 06:23:45 +0000 (0:00:01.170) 0:46:57.381 ***** 2026-02-14 06:23:50.357373 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:23:50.357384 | orchestrator | 2026-02-14 06:23:50.357395 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-14 06:23:50.357406 | orchestrator | Saturday 14 February 2026 06:23:46 +0000 (0:00:01.736) 0:46:59.118 ***** 2026-02-14 06:23:50.357424 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-14 06:23:50.357435 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-14 06:23:50.357446 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-14 06:23:50.357457 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:23:50.357467 | orchestrator | 2026-02-14 06:23:50.357478 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-14 06:23:50.357489 | orchestrator | Saturday 14 February 2026 06:23:47 +0000 (0:00:01.193) 0:47:00.311 ***** 2026-02-14 06:23:50.357499 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:23:50.357510 | orchestrator | 2026-02-14 06:23:50.357521 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-14 06:23:50.357532 | orchestrator | Saturday 14 February 2026 06:23:49 +0000 (0:00:01.200) 0:47:01.512 ***** 2026-02-14 06:23:50.357549 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:24:32.932305 | orchestrator | 2026-02-14 06:24:32.932431 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-14 06:24:32.932441 | 
orchestrator | Saturday 14 February 2026 06:23:50 +0000 (0:00:01.156) 0:47:02.669 ***** 2026-02-14 06:24:32.932446 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:24:32.932452 | orchestrator | 2026-02-14 06:24:32.932457 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-14 06:24:32.932461 | orchestrator | Saturday 14 February 2026 06:23:51 +0000 (0:00:01.129) 0:47:03.798 ***** 2026-02-14 06:24:32.932466 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:24:32.932471 | orchestrator | 2026-02-14 06:24:32.932475 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-14 06:24:32.932480 | orchestrator | Saturday 14 February 2026 06:23:52 +0000 (0:00:01.142) 0:47:04.940 ***** 2026-02-14 06:24:32.932484 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:24:32.932489 | orchestrator | 2026-02-14 06:24:32.932494 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-14 06:24:32.932498 | orchestrator | Saturday 14 February 2026 06:23:53 +0000 (0:00:00.874) 0:47:05.815 ***** 2026-02-14 06:24:32.932503 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:24:32.932508 | orchestrator | 2026-02-14 06:24:32.932513 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-14 06:24:32.932518 | orchestrator | Saturday 14 February 2026 06:23:55 +0000 (0:00:02.097) 0:47:07.912 ***** 2026-02-14 06:24:32.932522 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:24:32.932527 | orchestrator | 2026-02-14 06:24:32.932531 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-14 06:24:32.932536 | orchestrator | Saturday 14 February 2026 06:23:56 +0000 (0:00:00.780) 0:47:08.693 ***** 2026-02-14 06:24:32.932540 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5 
2026-02-14 06:24:32.932545 | orchestrator | 2026-02-14 06:24:32.932549 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-14 06:24:32.932554 | orchestrator | Saturday 14 February 2026 06:23:57 +0000 (0:00:01.299) 0:47:09.993 ***** 2026-02-14 06:24:32.932558 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:24:32.932563 | orchestrator | 2026-02-14 06:24:32.932567 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-14 06:24:32.932572 | orchestrator | Saturday 14 February 2026 06:23:58 +0000 (0:00:01.175) 0:47:11.169 ***** 2026-02-14 06:24:32.932576 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:24:32.932581 | orchestrator | 2026-02-14 06:24:32.932585 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-14 06:24:32.932590 | orchestrator | Saturday 14 February 2026 06:23:59 +0000 (0:00:01.143) 0:47:12.312 ***** 2026-02-14 06:24:32.932594 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:24:32.932599 | orchestrator | 2026-02-14 06:24:32.932615 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-14 06:24:32.932636 | orchestrator | Saturday 14 February 2026 06:24:01 +0000 (0:00:01.166) 0:47:13.479 ***** 2026-02-14 06:24:32.932641 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:24:32.932646 | orchestrator | 2026-02-14 06:24:32.932650 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-14 06:24:32.932655 | orchestrator | Saturday 14 February 2026 06:24:02 +0000 (0:00:01.168) 0:47:14.647 ***** 2026-02-14 06:24:32.932659 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:24:32.932664 | orchestrator | 2026-02-14 06:24:32.932668 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-14 06:24:32.932672 | orchestrator | 
Saturday 14 February 2026 06:24:03 +0000 (0:00:01.179) 0:47:15.827 ***** 2026-02-14 06:24:32.932677 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:24:32.932681 | orchestrator | 2026-02-14 06:24:32.932686 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-14 06:24:32.932690 | orchestrator | Saturday 14 February 2026 06:24:04 +0000 (0:00:01.150) 0:47:16.977 ***** 2026-02-14 06:24:32.932694 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:24:32.932699 | orchestrator | 2026-02-14 06:24:32.932703 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-14 06:24:32.932708 | orchestrator | Saturday 14 February 2026 06:24:05 +0000 (0:00:01.157) 0:47:18.135 ***** 2026-02-14 06:24:32.932712 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:24:32.932717 | orchestrator | 2026-02-14 06:24:32.932721 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-14 06:24:32.932726 | orchestrator | Saturday 14 February 2026 06:24:06 +0000 (0:00:01.135) 0:47:19.270 ***** 2026-02-14 06:24:32.932730 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:24:32.932735 | orchestrator | 2026-02-14 06:24:32.932739 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-14 06:24:32.932743 | orchestrator | Saturday 14 February 2026 06:24:07 +0000 (0:00:00.786) 0:47:20.057 ***** 2026-02-14 06:24:32.932748 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5 2026-02-14 06:24:32.932754 | orchestrator | 2026-02-14 06:24:32.932758 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-14 06:24:32.932763 | orchestrator | Saturday 14 February 2026 06:24:09 +0000 (0:00:01.278) 0:47:21.336 ***** 2026-02-14 06:24:32.932767 | orchestrator | ok: [testbed-node-5] => 
(item=/etc/ceph) 2026-02-14 06:24:32.932772 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/) 2026-02-14 06:24:32.932777 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-02-14 06:24:32.932782 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-02-14 06:24:32.932786 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-02-14 06:24:32.932790 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-02-14 06:24:32.932795 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-02-14 06:24:32.932799 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-02-14 06:24:32.932804 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-14 06:24:32.932819 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-14 06:24:32.932824 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-14 06:24:32.932828 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-14 06:24:32.932833 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-14 06:24:32.932837 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-14 06:24:32.932842 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2026-02-14 06:24:32.932846 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph) 2026-02-14 06:24:32.932851 | orchestrator | 2026-02-14 06:24:32.932855 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-14 06:24:32.932860 | orchestrator | Saturday 14 February 2026 06:24:15 +0000 (0:00:06.056) 0:47:27.392 ***** 2026-02-14 06:24:32.932868 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5 2026-02-14 06:24:32.932873 | orchestrator | 2026-02-14 06:24:32.932878 | orchestrator | TASK 
[ceph-config : Create rados gateway instance directories] ***************** 2026-02-14 06:24:32.932882 | orchestrator | Saturday 14 February 2026 06:24:16 +0000 (0:00:01.178) 0:47:28.570 ***** 2026-02-14 06:24:32.932886 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-14 06:24:32.932892 | orchestrator | 2026-02-14 06:24:32.932897 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-14 06:24:32.932901 | orchestrator | Saturday 14 February 2026 06:24:17 +0000 (0:00:01.485) 0:47:30.055 ***** 2026-02-14 06:24:32.932906 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-14 06:24:32.932910 | orchestrator | 2026-02-14 06:24:32.932915 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-14 06:24:32.932919 | orchestrator | Saturday 14 February 2026 06:24:19 +0000 (0:00:01.631) 0:47:31.686 ***** 2026-02-14 06:24:32.932923 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:24:32.932928 | orchestrator | 2026-02-14 06:24:32.932932 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-14 06:24:32.932937 | orchestrator | Saturday 14 February 2026 06:24:20 +0000 (0:00:00.772) 0:47:32.459 ***** 2026-02-14 06:24:32.932941 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:24:32.932946 | orchestrator | 2026-02-14 06:24:32.932950 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-14 06:24:32.932955 | orchestrator | Saturday 14 February 2026 06:24:20 +0000 (0:00:00.853) 0:47:33.313 ***** 2026-02-14 06:24:32.932959 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:24:32.932964 | orchestrator | 2026-02-14 06:24:32.932971 | orchestrator | TASK [ceph-config : 
Set_fact rejected_devices] ********************************* 2026-02-14 06:24:32.932976 | orchestrator | Saturday 14 February 2026 06:24:21 +0000 (0:00:00.765) 0:47:34.078 ***** 2026-02-14 06:24:32.932980 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:24:32.932985 | orchestrator | 2026-02-14 06:24:32.932989 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-14 06:24:32.932994 | orchestrator | Saturday 14 February 2026 06:24:22 +0000 (0:00:00.794) 0:47:34.873 ***** 2026-02-14 06:24:32.932998 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:24:32.933003 | orchestrator | 2026-02-14 06:24:32.933007 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-14 06:24:32.933012 | orchestrator | Saturday 14 February 2026 06:24:23 +0000 (0:00:00.799) 0:47:35.672 ***** 2026-02-14 06:24:32.933016 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:24:32.933021 | orchestrator | 2026-02-14 06:24:32.933025 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-14 06:24:32.933030 | orchestrator | Saturday 14 February 2026 06:24:24 +0000 (0:00:00.797) 0:47:36.469 ***** 2026-02-14 06:24:32.933034 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:24:32.933039 | orchestrator | 2026-02-14 06:24:32.933043 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-14 06:24:32.933048 | orchestrator | Saturday 14 February 2026 06:24:24 +0000 (0:00:00.814) 0:47:37.284 ***** 2026-02-14 06:24:32.933052 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:24:32.933057 | orchestrator | 2026-02-14 06:24:32.933061 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-14 06:24:32.933066 | orchestrator | Saturday 14 
February 2026 06:24:25 +0000 (0:00:00.847) 0:47:38.132 ***** 2026-02-14 06:24:32.933070 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:24:32.933074 | orchestrator | 2026-02-14 06:24:32.933079 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-14 06:24:32.933087 | orchestrator | Saturday 14 February 2026 06:24:26 +0000 (0:00:00.799) 0:47:38.932 ***** 2026-02-14 06:24:32.933091 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:24:32.933096 | orchestrator | 2026-02-14 06:24:32.933100 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-14 06:24:32.933105 | orchestrator | Saturday 14 February 2026 06:24:27 +0000 (0:00:00.796) 0:47:39.729 ***** 2026-02-14 06:24:32.933109 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:24:32.933114 | orchestrator | 2026-02-14 06:24:32.933118 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-14 06:24:32.933123 | orchestrator | Saturday 14 February 2026 06:24:28 +0000 (0:00:00.834) 0:47:40.564 ***** 2026-02-14 06:24:32.933127 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] 2026-02-14 06:24:32.933131 | orchestrator | 2026-02-14 06:24:32.933136 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-14 06:24:32.933140 | orchestrator | Saturday 14 February 2026 06:24:32 +0000 (0:00:03.849) 0:47:44.413 ***** 2026-02-14 06:24:32.933148 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-14 06:25:14.900620 | orchestrator | 2026-02-14 06:25:14.900733 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-14 06:25:14.900750 | orchestrator | Saturday 14 February 2026 06:24:32 +0000 (0:00:00.834) 0:47:45.247 ***** 2026-02-14 06:25:14.900763 | 
orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}]) 2026-02-14 06:25:14.900777 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 2026-02-14 06:25:14.900789 | orchestrator | 2026-02-14 06:25:14.900800 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-14 06:25:14.900810 | orchestrator | Saturday 14 February 2026 06:24:40 +0000 (0:00:07.131) 0:47:52.379 ***** 2026-02-14 06:25:14.900820 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:25:14.900831 | orchestrator | 2026-02-14 06:25:14.900840 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-14 06:25:14.900850 | orchestrator | Saturday 14 February 2026 06:24:40 +0000 (0:00:00.794) 0:47:53.173 ***** 2026-02-14 06:25:14.900860 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:25:14.900870 | orchestrator | 2026-02-14 06:25:14.900880 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-14 06:25:14.900891 | orchestrator | Saturday 14 February 2026 06:24:41 +0000 (0:00:00.852) 0:47:54.025 ***** 2026-02-14 06:25:14.900900 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:25:14.900910 | orchestrator | 2026-02-14 06:25:14.900919 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv4] **** 2026-02-14 06:25:14.900929 | orchestrator | Saturday 14 February 2026 06:24:42 +0000 (0:00:00.814) 0:47:54.840 ***** 2026-02-14 06:25:14.900938 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:25:14.900948 | orchestrator | 2026-02-14 06:25:14.900958 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-14 06:25:14.900967 | orchestrator | Saturday 14 February 2026 06:24:43 +0000 (0:00:00.806) 0:47:55.647 ***** 2026-02-14 06:25:14.900994 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:25:14.901005 | orchestrator | 2026-02-14 06:25:14.901016 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-14 06:25:14.901027 | orchestrator | Saturday 14 February 2026 06:24:44 +0000 (0:00:00.774) 0:47:56.421 ***** 2026-02-14 06:25:14.901065 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:25:14.901078 | orchestrator | 2026-02-14 06:25:14.901088 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-14 06:25:14.901099 | orchestrator | Saturday 14 February 2026 06:24:44 +0000 (0:00:00.879) 0:47:57.301 ***** 2026-02-14 06:25:14.901110 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-14 06:25:14.901121 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-14 06:25:14.901132 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-14 06:25:14.901143 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:25:14.901156 | orchestrator | 2026-02-14 06:25:14.901168 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-14 06:25:14.901180 | orchestrator | Saturday 14 February 2026 06:24:46 +0000 (0:00:01.479) 0:47:58.781 ***** 2026-02-14 06:25:14.901193 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-14 06:25:14.901205 | 
orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-14 06:25:14.901217 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-14 06:25:14.901229 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:25:14.901241 | orchestrator | 2026-02-14 06:25:14.901253 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-14 06:25:14.901266 | orchestrator | Saturday 14 February 2026 06:24:47 +0000 (0:00:01.515) 0:48:00.296 ***** 2026-02-14 06:25:14.901278 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-14 06:25:14.901290 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-14 06:25:14.901302 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-14 06:25:14.901314 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:25:14.901325 | orchestrator | 2026-02-14 06:25:14.901335 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-14 06:25:14.901346 | orchestrator | Saturday 14 February 2026 06:24:49 +0000 (0:00:01.189) 0:48:01.485 ***** 2026-02-14 06:25:14.901357 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:25:14.901367 | orchestrator | 2026-02-14 06:25:14.901378 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-14 06:25:14.901389 | orchestrator | Saturday 14 February 2026 06:24:49 +0000 (0:00:00.797) 0:48:02.284 ***** 2026-02-14 06:25:14.901400 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-14 06:25:14.901411 | orchestrator | 2026-02-14 06:25:14.901448 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-14 06:25:14.901460 | orchestrator | Saturday 14 February 2026 06:24:50 +0000 (0:00:00.978) 0:48:03.262 ***** 2026-02-14 06:25:14.901471 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:25:14.901482 | orchestrator | 
2026-02-14 06:25:14.901493 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-02-14 06:25:14.901503 | orchestrator | Saturday 14 February 2026 06:24:52 +0000 (0:00:01.415) 0:48:04.678 ***** 2026-02-14 06:25:14.901514 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:25:14.901525 | orchestrator | 2026-02-14 06:25:14.901553 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-02-14 06:25:14.901565 | orchestrator | Saturday 14 February 2026 06:24:53 +0000 (0:00:00.808) 0:48:05.487 ***** 2026-02-14 06:25:14.901575 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 06:25:14.901587 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:25:14.901597 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 06:25:14.901608 | orchestrator | 2026-02-14 06:25:14.901619 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-02-14 06:25:14.901630 | orchestrator | Saturday 14 February 2026 06:24:54 +0000 (0:00:01.732) 0:48:07.219 ***** 2026-02-14 06:25:14.901640 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-5 2026-02-14 06:25:14.901660 | orchestrator | 2026-02-14 06:25:14.901671 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-02-14 06:25:14.901682 | orchestrator | Saturday 14 February 2026 06:24:56 +0000 (0:00:01.228) 0:48:08.448 ***** 2026-02-14 06:25:14.901692 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:25:14.901703 | orchestrator | 2026-02-14 06:25:14.901714 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-02-14 06:25:14.901724 | orchestrator | Saturday 14 February 2026 06:24:57 +0000 (0:00:01.182) 
0:48:09.631 ***** 2026-02-14 06:25:14.901735 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:25:14.901746 | orchestrator | 2026-02-14 06:25:14.901757 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-02-14 06:25:14.901768 | orchestrator | Saturday 14 February 2026 06:24:58 +0000 (0:00:01.109) 0:48:10.740 ***** 2026-02-14 06:25:14.901778 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:25:14.901789 | orchestrator | 2026-02-14 06:25:14.901799 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-02-14 06:25:14.901810 | orchestrator | Saturday 14 February 2026 06:24:59 +0000 (0:00:01.486) 0:48:12.227 ***** 2026-02-14 06:25:14.901821 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:25:14.901831 | orchestrator | 2026-02-14 06:25:14.901842 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-02-14 06:25:14.901853 | orchestrator | Saturday 14 February 2026 06:25:01 +0000 (0:00:01.166) 0:48:13.393 ***** 2026-02-14 06:25:14.901863 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-14 06:25:14.901874 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-14 06:25:14.901885 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-14 06:25:14.901901 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-14 06:25:14.901912 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-14 06:25:14.901929 | orchestrator | 2026-02-14 06:25:14.901947 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-02-14 06:25:14.901965 | orchestrator | Saturday 14 February 2026 06:25:03 +0000 (0:00:02.497) 0:48:15.891 ***** 2026-02-14 
06:25:14.901983 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:25:14.902001 | orchestrator | 2026-02-14 06:25:14.902086 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-02-14 06:25:14.902111 | orchestrator | Saturday 14 February 2026 06:25:04 +0000 (0:00:00.777) 0:48:16.668 ***** 2026-02-14 06:25:14.902131 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-5 2026-02-14 06:25:14.902150 | orchestrator | 2026-02-14 06:25:14.902169 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-02-14 06:25:14.902181 | orchestrator | Saturday 14 February 2026 06:25:05 +0000 (0:00:01.236) 0:48:17.904 ***** 2026-02-14 06:25:14.902192 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-14 06:25:14.902203 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-02-14 06:25:14.902213 | orchestrator | 2026-02-14 06:25:14.902304 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-02-14 06:25:14.902315 | orchestrator | Saturday 14 February 2026 06:25:07 +0000 (0:00:01.864) 0:48:19.768 ***** 2026-02-14 06:25:14.902326 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-14 06:25:14.902337 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-14 06:25:14.902348 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-14 06:25:14.902359 | orchestrator | 2026-02-14 06:25:14.902369 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-02-14 06:25:14.902380 | orchestrator | Saturday 14 February 2026 06:25:10 +0000 (0:00:03.350) 0:48:23.119 ***** 2026-02-14 06:25:14.902391 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-02-14 06:25:14.902412 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-14 
06:25:14.902454 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:25:14.902466 | orchestrator | 2026-02-14 06:25:14.902477 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-14 06:25:14.902488 | orchestrator | Saturday 14 February 2026 06:25:12 +0000 (0:00:01.614) 0:48:24.734 ***** 2026-02-14 06:25:14.902499 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:25:14.902510 | orchestrator | 2026-02-14 06:25:14.902520 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-14 06:25:14.902531 | orchestrator | Saturday 14 February 2026 06:25:13 +0000 (0:00:00.896) 0:48:25.630 ***** 2026-02-14 06:25:14.902542 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:25:14.902553 | orchestrator | 2026-02-14 06:25:14.902564 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-14 06:25:14.902574 | orchestrator | Saturday 14 February 2026 06:25:14 +0000 (0:00:00.791) 0:48:26.422 ***** 2026-02-14 06:25:14.902585 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:25:14.902596 | orchestrator | 2026-02-14 06:25:14.902620 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-14 06:27:34.888979 | orchestrator | Saturday 14 February 2026 06:25:14 +0000 (0:00:00.790) 0:48:27.213 ***** 2026-02-14 06:27:34.889101 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-5 2026-02-14 06:27:34.889118 | orchestrator | 2026-02-14 06:27:34.889132 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-14 06:27:34.889143 | orchestrator | Saturday 14 February 2026 06:25:16 +0000 (0:00:01.299) 0:48:28.513 ***** 2026-02-14 06:27:34.889155 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:27:34.889167 | orchestrator | 2026-02-14 06:27:34.889179 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-02-14 06:27:34.889190 | orchestrator | Saturday 14 February 2026 06:25:17 +0000 (0:00:01.472) 0:48:29.985 ***** 2026-02-14 06:27:34.889201 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:27:34.889212 | orchestrator | 2026-02-14 06:27:34.889222 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-14 06:27:34.889233 | orchestrator | Saturday 14 February 2026 06:25:21 +0000 (0:00:03.377) 0:48:33.363 ***** 2026-02-14 06:27:34.889244 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-5 2026-02-14 06:27:34.889255 | orchestrator | 2026-02-14 06:27:34.889266 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-02-14 06:27:34.889277 | orchestrator | Saturday 14 February 2026 06:25:22 +0000 (0:00:01.120) 0:48:34.483 ***** 2026-02-14 06:27:34.889287 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:27:34.889298 | orchestrator | 2026-02-14 06:27:34.889309 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-02-14 06:27:34.889320 | orchestrator | Saturday 14 February 2026 06:25:24 +0000 (0:00:01.965) 0:48:36.449 ***** 2026-02-14 06:27:34.889331 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:27:34.889342 | orchestrator | 2026-02-14 06:27:34.889357 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-02-14 06:27:34.889368 | orchestrator | Saturday 14 February 2026 06:25:26 +0000 (0:00:01.975) 0:48:38.424 ***** 2026-02-14 06:27:34.889378 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:27:34.889389 | orchestrator | 2026-02-14 06:27:34.889400 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-02-14 06:27:34.889411 | orchestrator | Saturday 14 February 2026 06:25:28 +0000 (0:00:02.198) 0:48:40.623 ***** 2026-02-14 
06:27:34.889422 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:27:34.889434 | orchestrator | 2026-02-14 06:27:34.889445 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-02-14 06:27:34.889455 | orchestrator | Saturday 14 February 2026 06:25:29 +0000 (0:00:01.188) 0:48:41.811 ***** 2026-02-14 06:27:34.889466 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:27:34.889477 | orchestrator | 2026-02-14 06:27:34.889505 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-02-14 06:27:34.889548 | orchestrator | Saturday 14 February 2026 06:25:30 +0000 (0:00:01.249) 0:48:43.061 ***** 2026-02-14 06:27:34.889569 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-02-14 06:27:34.889588 | orchestrator | ok: [testbed-node-5] => (item=4) 2026-02-14 06:27:34.889606 | orchestrator | 2026-02-14 06:27:34.889651 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-02-14 06:27:34.889670 | orchestrator | Saturday 14 February 2026 06:25:32 +0000 (0:00:01.828) 0:48:44.890 ***** 2026-02-14 06:27:34.889687 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-02-14 06:27:34.889703 | orchestrator | ok: [testbed-node-5] => (item=4) 2026-02-14 06:27:34.889721 | orchestrator | 2026-02-14 06:27:34.889739 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-02-14 06:27:34.889757 | orchestrator | Saturday 14 February 2026 06:25:35 +0000 (0:00:02.862) 0:48:47.752 ***** 2026-02-14 06:27:34.889774 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-02-14 06:27:34.889792 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-02-14 06:27:34.889812 | orchestrator | 2026-02-14 06:27:34.889832 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-02-14 06:27:34.889850 | orchestrator | Saturday 14 February 2026 06:25:39 +0000 (0:00:04.186) 
0:48:51.939 ***** 2026-02-14 06:27:34.889868 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:27:34.889885 | orchestrator | 2026-02-14 06:27:34.889900 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-02-14 06:27:34.889917 | orchestrator | Saturday 14 February 2026 06:25:41 +0000 (0:00:01.491) 0:48:53.430 ***** 2026-02-14 06:27:34.889935 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2026-02-14 06:27:34.889955 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-14 06:27:34.889975 | orchestrator | 2026-02-14 06:27:34.889995 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-02-14 06:27:34.890013 | orchestrator | Saturday 14 February 2026 06:25:53 +0000 (0:00:12.886) 0:49:06.316 ***** 2026-02-14 06:27:34.890093 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:27:34.890104 | orchestrator | 2026-02-14 06:27:34.890115 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-02-14 06:27:34.890126 | orchestrator | Saturday 14 February 2026 06:25:54 +0000 (0:00:00.894) 0:49:07.211 ***** 2026-02-14 06:27:34.890137 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:27:34.890148 | orchestrator | 2026-02-14 06:27:34.890158 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-02-14 06:27:34.890169 | orchestrator | Saturday 14 February 2026 06:25:55 +0000 (0:00:00.775) 0:49:07.986 ***** 2026-02-14 06:27:34.890180 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:27:34.890191 | orchestrator | 2026-02-14 06:27:34.890201 | orchestrator | TASK [Waiting for clean pgs...] 
************************************************ 2026-02-14 06:27:34.890212 | orchestrator | Saturday 14 February 2026 06:25:56 +0000 (0:00:00.767) 0:49:08.754 ***** 2026-02-14 06:27:34.890223 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-14 06:27:34.890234 | orchestrator | 2026-02-14 06:27:34.890245 | orchestrator | PLAY [Complete osd upgrade] **************************************************** 2026-02-14 06:27:34.890255 | orchestrator | 2026-02-14 06:27:34.890289 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-14 06:27:34.890301 | orchestrator | Saturday 14 February 2026 06:25:59 +0000 (0:00:02.743) 0:49:11.497 ***** 2026-02-14 06:27:34.890350 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:27:34.890361 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:27:34.890372 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:27:34.890383 | orchestrator | 2026-02-14 06:27:34.890394 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-14 06:27:34.890405 | orchestrator | Saturday 14 February 2026 06:26:00 +0000 (0:00:01.684) 0:49:13.182 ***** 2026-02-14 06:27:34.890415 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:27:34.890439 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:27:34.890450 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:27:34.890461 | orchestrator | 2026-02-14 06:27:34.890472 | orchestrator | TASK [Re-enable pg autoscale on pools] ***************************************** 2026-02-14 06:27:34.890483 | orchestrator | Saturday 14 February 2026 06:26:02 +0000 (0:00:01.655) 0:49:14.837 ***** 2026-02-14 06:27:34.890493 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'}) 2026-02-14 06:27:34.890504 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'}) 2026-02-14 
06:27:34.890516 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'}) 2026-02-14 06:27:34.890527 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'}) 2026-02-14 06:27:34.890539 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'}) 2026-02-14 06:27:34.890550 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'}) 2026-02-14 06:27:34.890561 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'}) 2026-02-14 06:27:34.890572 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'}) 2026-02-14 06:27:34.890582 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'}) 2026-02-14 06:27:34.890593 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})  2026-02-14 06:27:34.890662 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})  2026-02-14 06:27:34.890685 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})  2026-02-14 06:27:34.890704 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})  2026-02-14 06:27:34.890722 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})  2026-02-14 06:27:34.890740 | orchestrator | 2026-02-14 06:27:34.890759 | orchestrator | TASK [Unset osd flags] ********************************************************* 2026-02-14 06:27:34.890779 | orchestrator | Saturday 14 February 2026 06:27:18 +0000 (0:01:15.927) 0:50:30.765 ***** 2026-02-14 06:27:34.890798 | orchestrator 
| changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout) 2026-02-14 06:27:34.890818 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub) 2026-02-14 06:27:34.890837 | orchestrator | 2026-02-14 06:27:34.890856 | orchestrator | TASK [Re-enable balancer] ****************************************************** 2026-02-14 06:27:34.890876 | orchestrator | Saturday 14 February 2026 06:27:24 +0000 (0:00:05.682) 0:50:36.448 ***** 2026-02-14 06:27:34.890894 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-14 06:27:34.890912 | orchestrator | 2026-02-14 06:27:34.890930 | orchestrator | PLAY [Upgrade ceph mdss cluster, deactivate all rank > 0] ********************** 2026-02-14 06:27:34.890949 | orchestrator | 2026-02-14 06:27:34.890969 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-14 06:27:34.890987 | orchestrator | Saturday 14 February 2026 06:27:27 +0000 (0:00:03.181) 0:50:39.629 ***** 2026-02-14 06:27:34.891007 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-02-14 06:27:34.891026 | orchestrator | 2026-02-14 06:27:34.891047 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-14 06:27:34.891068 | orchestrator | Saturday 14 February 2026 06:27:28 +0000 (0:00:01.133) 0:50:40.763 ***** 2026-02-14 06:27:34.891089 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:27:34.891109 | orchestrator | 2026-02-14 06:27:34.891129 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-14 06:27:34.891158 | orchestrator | Saturday 14 February 2026 06:27:29 +0000 (0:00:01.489) 0:50:42.253 ***** 2026-02-14 06:27:34.891170 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:27:34.891181 | orchestrator | 2026-02-14 06:27:34.891191 | orchestrator | TASK [ceph-facts : Check if podman binary is 
present] ************************** 2026-02-14 06:27:34.891202 | orchestrator | Saturday 14 February 2026 06:27:31 +0000 (0:00:01.207) 0:50:43.461 ***** 2026-02-14 06:27:34.891213 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:27:34.891224 | orchestrator | 2026-02-14 06:27:34.891234 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-14 06:27:34.891245 | orchestrator | Saturday 14 February 2026 06:27:32 +0000 (0:00:01.435) 0:50:44.896 ***** 2026-02-14 06:27:34.891256 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:27:34.891266 | orchestrator | 2026-02-14 06:27:34.891298 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-14 06:27:34.891309 | orchestrator | Saturday 14 February 2026 06:27:33 +0000 (0:00:01.147) 0:50:46.044 ***** 2026-02-14 06:27:34.891320 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:27:34.891331 | orchestrator | 2026-02-14 06:27:34.891342 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-14 06:27:34.891367 | orchestrator | Saturday 14 February 2026 06:27:34 +0000 (0:00:01.153) 0:50:47.198 ***** 2026-02-14 06:28:00.586190 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:28:00.586309 | orchestrator | 2026-02-14 06:28:00.586326 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-14 06:28:00.586339 | orchestrator | Saturday 14 February 2026 06:27:36 +0000 (0:00:01.169) 0:50:48.368 ***** 2026-02-14 06:28:00.586352 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:28:00.586364 | orchestrator | 2026-02-14 06:28:00.586375 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-14 06:28:00.586386 | orchestrator | Saturday 14 February 2026 06:27:37 +0000 (0:00:01.168) 0:50:49.536 ***** 2026-02-14 06:28:00.586397 | orchestrator | ok: [testbed-node-0] 2026-02-14 
06:28:00.586408 | orchestrator | 2026-02-14 06:28:00.586419 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-14 06:28:00.586430 | orchestrator | Saturday 14 February 2026 06:27:38 +0000 (0:00:01.145) 0:50:50.682 ***** 2026-02-14 06:28:00.586441 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-14 06:28:00.586452 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:28:00.586463 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 06:28:00.586474 | orchestrator | 2026-02-14 06:28:00.586485 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-14 06:28:00.586495 | orchestrator | Saturday 14 February 2026 06:27:40 +0000 (0:00:01.809) 0:50:52.492 ***** 2026-02-14 06:28:00.586506 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:28:00.586517 | orchestrator | 2026-02-14 06:28:00.586528 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-14 06:28:00.586538 | orchestrator | Saturday 14 February 2026 06:27:41 +0000 (0:00:01.366) 0:50:53.858 ***** 2026-02-14 06:28:00.586549 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-14 06:28:00.586560 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:28:00.586571 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 06:28:00.586582 | orchestrator | 2026-02-14 06:28:00.586592 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-14 06:28:00.586603 | orchestrator | Saturday 14 February 2026 06:27:44 +0000 (0:00:03.228) 0:50:57.087 ***** 2026-02-14 06:28:00.586615 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-14 06:28:00.586626 | 
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-14 06:28:00.586637 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-14 06:28:00.586703 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:28:00.586745 | orchestrator | 2026-02-14 06:28:00.586759 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-14 06:28:00.586772 | orchestrator | Saturday 14 February 2026 06:27:46 +0000 (0:00:01.502) 0:50:58.589 ***** 2026-02-14 06:28:00.586786 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-14 06:28:00.586802 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-14 06:28:00.586815 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-14 06:28:00.586829 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:28:00.586842 | orchestrator | 2026-02-14 06:28:00.586855 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-14 06:28:00.586867 | orchestrator | Saturday 14 February 2026 06:27:48 +0000 (0:00:02.138) 0:51:00.727 ***** 2026-02-14 06:28:00.586881 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 06:28:00.586897 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 06:28:00.586931 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 06:28:00.586944 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:28:00.586957 | orchestrator | 2026-02-14 06:28:00.586970 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-14 06:28:00.586983 | orchestrator | Saturday 14 February 2026 06:27:49 +0000 (0:00:01.263) 0:51:01.991 ***** 2026-02-14 06:28:00.586997 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'fcade5e8eca4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-14 06:27:42.096921', 'end': '2026-02-14 06:27:42.134368', 'delta': '0:00:00.037447', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 
'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fcade5e8eca4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-14 06:28:00.587021 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'b8937503c016', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-14 06:27:42.659768', 'end': '2026-02-14 06:27:42.716022', 'delta': '0:00:00.056254', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b8937503c016'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-14 06:28:00.587043 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'bc1e9cbf1ddd', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-14 06:27:43.580924', 'end': '2026-02-14 06:27:43.627019', 'delta': '0:00:00.046095', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['bc1e9cbf1ddd'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-14 06:28:00.587055 | orchestrator | 2026-02-14 06:28:00.587066 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 
2026-02-14 06:28:00.587077 | orchestrator | Saturday 14 February 2026 06:27:51 +0000 (0:00:01.542) 0:51:03.534 ***** 2026-02-14 06:28:00.587088 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:28:00.587099 | orchestrator | 2026-02-14 06:28:00.587109 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-14 06:28:00.587120 | orchestrator | Saturday 14 February 2026 06:27:52 +0000 (0:00:01.284) 0:51:04.818 ***** 2026-02-14 06:28:00.587131 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:28:00.587142 | orchestrator | 2026-02-14 06:28:00.587153 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-14 06:28:00.587164 | orchestrator | Saturday 14 February 2026 06:27:53 +0000 (0:00:01.259) 0:51:06.078 ***** 2026-02-14 06:28:00.587175 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:28:00.587186 | orchestrator | 2026-02-14 06:28:00.587196 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-14 06:28:00.587207 | orchestrator | Saturday 14 February 2026 06:27:54 +0000 (0:00:01.232) 0:51:07.311 ***** 2026-02-14 06:28:00.587218 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:28:00.587229 | orchestrator | 2026-02-14 06:28:00.587239 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-14 06:28:00.587250 | orchestrator | Saturday 14 February 2026 06:27:56 +0000 (0:00:01.994) 0:51:09.306 ***** 2026-02-14 06:28:00.587261 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:28:00.587272 | orchestrator | 2026-02-14 06:28:00.587282 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-14 06:28:00.587293 | orchestrator | Saturday 14 February 2026 06:27:58 +0000 (0:00:01.182) 0:51:10.489 ***** 2026-02-14 06:28:00.587304 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:28:00.587315 | orchestrator | 
2026-02-14 06:28:00.587326 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-14 06:28:00.587337 | orchestrator | Saturday 14 February 2026 06:27:59 +0000 (0:00:01.121) 0:51:11.611 ***** 2026-02-14 06:28:00.587348 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:28:00.587358 | orchestrator | 2026-02-14 06:28:00.587369 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-14 06:28:00.587387 | orchestrator | Saturday 14 February 2026 06:28:00 +0000 (0:00:01.286) 0:51:12.897 ***** 2026-02-14 06:28:11.520793 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:28:11.521728 | orchestrator | 2026-02-14 06:28:11.521769 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-14 06:28:11.521784 | orchestrator | Saturday 14 February 2026 06:28:01 +0000 (0:00:01.146) 0:51:14.043 ***** 2026-02-14 06:28:11.521818 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:28:11.521830 | orchestrator | 2026-02-14 06:28:11.521841 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-14 06:28:11.521852 | orchestrator | Saturday 14 February 2026 06:28:02 +0000 (0:00:01.173) 0:51:15.217 ***** 2026-02-14 06:28:11.521863 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:28:11.521874 | orchestrator | 2026-02-14 06:28:11.521885 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-14 06:28:11.521896 | orchestrator | Saturday 14 February 2026 06:28:04 +0000 (0:00:01.149) 0:51:16.367 ***** 2026-02-14 06:28:11.521906 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:28:11.521917 | orchestrator | 2026-02-14 06:28:11.521928 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-14 06:28:11.521939 | orchestrator | Saturday 14 February 2026 06:28:05 +0000 
(0:00:01.138) 0:51:17.506 *****
2026-02-14 06:28:11.521949 | orchestrator | skipping: [testbed-node-0]
2026-02-14 06:28:11.521960 | orchestrator |
2026-02-14 06:28:11.521971 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-14 06:28:11.521982 | orchestrator | Saturday 14 February 2026 06:28:06 +0000 (0:00:01.150) 0:51:18.656 *****
2026-02-14 06:28:11.521992 | orchestrator | skipping: [testbed-node-0]
2026-02-14 06:28:11.522003 | orchestrator |
2026-02-14 06:28:11.522014 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-14 06:28:11.522075 | orchestrator | Saturday 14 February 2026 06:28:07 +0000 (0:00:01.170) 0:51:19.826 *****
2026-02-14 06:28:11.522086 | orchestrator | skipping: [testbed-node-0]
2026-02-14 06:28:11.522097 | orchestrator |
2026-02-14 06:28:11.522107 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-14 06:28:11.522118 | orchestrator | Saturday 14 February 2026 06:28:08 +0000 (0:00:01.278) 0:51:21.105 *****
2026-02-14 06:28:11.522147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-14 06:28:11.522164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-14 06:28:11.522176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-14 06:28:11.522189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-02-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-14 06:28:11.522204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-14 06:28:11.522246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-14 06:28:11.522259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-14 06:28:11.522281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7d6eeb05', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part16', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part14', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part15', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part1', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-14 06:28:11.522296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-14 06:28:11.522308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-14 06:28:11.522326 | orchestrator | skipping: [testbed-node-0]
2026-02-14 06:28:11.522337 | orchestrator |
2026-02-14 06:28:11.522348 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-14 06:28:11.522359 | orchestrator | Saturday 14 February 2026 06:28:10 +0000 (0:00:01.342) 0:51:22.447 *****
2026-02-14 06:28:11.522380 | orchestrator | skipping: [testbed-node-0] =>
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-14 06:28:15.782343 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-14 06:28:15.782454 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-14 06:28:15.782494 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-02-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-14 06:28:15.782509 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-14 06:28:15.782520 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-14 06:28:15.782555 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-14 06:28:15.782599 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7d6eeb05', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part16', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part14', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part15', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part1', 'scsi-SQEMU_QEMU_HARDDISK_7d6eeb05-e83d-4317-802b-0715782d7f16-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-14 06:28:15.782614 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-14 06:28:15.782626 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-14 06:28:15.782645 | orchestrator | skipping: [testbed-node-0]
2026-02-14 06:28:15.782659 | orchestrator |
2026-02-14 06:28:15.782699 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-14 06:28:15.782712 | orchestrator | Saturday 14 February 2026 06:28:11 +0000 (0:00:01.390) 0:51:23.837 *****
2026-02-14 06:28:15.782723 | orchestrator | ok: [testbed-node-0]
2026-02-14 06:28:15.782735 | orchestrator |
2026-02-14 06:28:15.782746 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-14 06:28:15.782757 | orchestrator | Saturday 14 February 2026 06:28:13 +0000 (0:00:01.576) 0:51:25.414 *****
2026-02-14 06:28:15.782768 | orchestrator | ok: [testbed-node-0]
2026-02-14 06:28:15.782779 | orchestrator |
2026-02-14 06:28:15.782790 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-14 06:28:15.782801 | orchestrator | Saturday 14 February 2026 06:28:14 +0000 (0:00:01.167) 0:51:26.581 *****
2026-02-14 06:28:15.782811 | orchestrator | ok: [testbed-node-0]
2026-02-14 06:28:15.782822 | orchestrator |
2026-02-14 06:28:15.782833 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-14 06:28:15.782851 | orchestrator | Saturday 14 February 2026 06:28:15 +0000 (0:00:01.515) 0:51:28.097 *****
2026-02-14 06:29:09.875444 | orchestrator | skipping: [testbed-node-0]
2026-02-14 06:29:09.875564 | orchestrator |
2026-02-14 06:29:09.875581 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-14 06:29:09.875594
| orchestrator | Saturday 14 February 2026 06:28:16 +0000 (0:00:01.129) 0:51:29.227 *****
2026-02-14 06:29:09.875605 | orchestrator | skipping: [testbed-node-0]
2026-02-14 06:29:09.875616 | orchestrator |
2026-02-14 06:29:09.875627 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-14 06:29:09.875638 | orchestrator | Saturday 14 February 2026 06:28:18 +0000 (0:00:01.244) 0:51:30.472 *****
2026-02-14 06:29:09.875649 | orchestrator | skipping: [testbed-node-0]
2026-02-14 06:29:09.875660 | orchestrator |
2026-02-14 06:29:09.875671 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-14 06:29:09.875682 | orchestrator | Saturday 14 February 2026 06:28:19 +0000 (0:00:01.188) 0:51:31.660 *****
2026-02-14 06:29:09.875693 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-14 06:29:09.875704 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-14 06:29:09.875715 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-14 06:29:09.875726 | orchestrator |
2026-02-14 06:29:09.875736 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-14 06:29:09.875814 | orchestrator | Saturday 14 February 2026 06:28:21 +0000 (0:00:02.032) 0:51:33.692 *****
2026-02-14 06:29:09.875825 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-14 06:29:09.875837 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-14 06:29:09.875847 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-14 06:29:09.875858 | orchestrator | skipping: [testbed-node-0]
2026-02-14 06:29:09.875869 | orchestrator |
2026-02-14 06:29:09.875879 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-14 06:29:09.875890 | orchestrator | Saturday 14 February 2026 06:28:22 +0000 (0:00:01.221) 0:51:34.913 *****
2026-02-14 06:29:09.875901 | orchestrator | skipping: [testbed-node-0]
2026-02-14 06:29:09.875912 | orchestrator |
2026-02-14 06:29:09.875923 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-14 06:29:09.875960 | orchestrator | Saturday 14 February 2026 06:28:23 +0000 (0:00:01.161) 0:51:36.075 *****
2026-02-14 06:29:09.875973 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-14 06:29:09.876001 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-14 06:29:09.876015 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-14 06:29:09.876028 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-14 06:29:09.876040 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-14 06:29:09.876053 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-14 06:29:09.876065 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-14 06:29:09.876077 | orchestrator |
2026-02-14 06:29:09.876090 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-14 06:29:09.876103 | orchestrator | Saturday 14 February 2026 06:28:26 +0000 (0:00:02.284) 0:51:38.359 *****
2026-02-14 06:29:09.876116 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-14 06:29:09.876129 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-14 06:29:09.876141 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-14 06:29:09.876154 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-14 06:29:09.876166 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-14 06:29:09.876176 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-14 06:29:09.876187 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-14 06:29:09.876198 | orchestrator |
2026-02-14 06:29:09.876209 | orchestrator | TASK [Set max_mds 1 on ceph fs] ************************************************
2026-02-14 06:29:09.876220 | orchestrator | Saturday 14 February 2026 06:28:29 +0000 (0:00:03.130) 0:51:41.490 *****
2026-02-14 06:29:09.876231 | orchestrator | ok: [testbed-node-0]
2026-02-14 06:29:09.876242 | orchestrator |
2026-02-14 06:29:09.876252 | orchestrator | TASK [Wait until only rank 0 is up] ********************************************
2026-02-14 06:29:09.876263 | orchestrator | Saturday 14 February 2026 06:28:32 +0000 (0:00:03.390) 0:51:44.880 *****
2026-02-14 06:29:09.876274 | orchestrator | ok: [testbed-node-0]
2026-02-14 06:29:09.876285 | orchestrator |
2026-02-14 06:29:09.876296 | orchestrator | TASK [Get name of remaining active mds] ****************************************
2026-02-14 06:29:09.876307 | orchestrator | Saturday 14 February 2026 06:28:35 +0000 (0:00:02.973) 0:51:47.854 *****
2026-02-14 06:29:09.876317 | orchestrator | ok: [testbed-node-0]
2026-02-14 06:29:09.876328 | orchestrator |
2026-02-14 06:29:09.876339 | orchestrator | TASK [Set_fact mds_active_name] ************************************************
2026-02-14 06:29:09.876350 | orchestrator | Saturday 14 February 2026 06:28:37 +0000 (0:00:02.299) 0:51:50.154 *****
2026-02-14 06:29:09.876383 | orchestrator | ok: [testbed-node-0] => (item={'key': 'gid_4761', 'value': {'gid': 4761, 'name': 'testbed-node-4', 'rank': 0, 'incarnation': 4, 'state': 'up:active', 'state_seq': 2, 'addr': '192.168.16.14:6817/3698779183', 'addrs': {'addrvec': [{'type': 'v2', 'addr': '192.168.16.14:6816', 'nonce': 3698779183}, {'type': 'v1', 'addr': '192.168.16.14:6817', 'nonce': 3698779183}]}, 'join_fscid': -1, 'export_targets': [], 'features': 4540138322906710015, 'flags': 0, 'compat': {'compat': {}, 'ro_compat': {}, 'incompat': {'feature_1': 'base v0.20', 'feature_2': 'client writeable ranges', 'feature_3': 'default file layouts on dirs', 'feature_4': 'dir inode in separate object', 'feature_5': 'mds uses versioned encoding', 'feature_6': 'dirfrag is stored in omap', 'feature_7': 'mds uses inline data', 'feature_8': 'no anchor table', 'feature_9': 'file layout v2', 'feature_10': 'snaprealm v2'}}}})
2026-02-14 06:29:09.876399 | orchestrator |
2026-02-14 06:29:09.876422 | orchestrator | TASK [Set_fact mds_active_host] ************************************************
2026-02-14 06:29:09.876433 | orchestrator | Saturday 14 February 2026 06:28:39 +0000 (0:00:01.287) 0:51:51.442 *****
2026-02-14 06:29:09.876443 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-14 06:29:09.876454 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-4)
2026-02-14 06:29:09.876465 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-14 06:29:09.876476 | orchestrator |
2026-02-14 06:29:09.876487 | orchestrator | TASK [Create standby_mdss group] ***********************************************
2026-02-14 06:29:09.876498 | orchestrator | Saturday 14 February 2026 06:28:40 +0000 (0:00:01.627) 0:51:53.069 *****
2026-02-14 06:29:09.876509 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-5)
2026-02-14 06:29:09.876529 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-3)
2026-02-14 06:29:09.876549 | orchestrator |
2026-02-14 06:29:09.876569 | orchestrator | TASK [Stop standby ceph mds] ***************************************************
2026-02-14 06:29:09.876590 | orchestrator | Saturday 14 February 2026 06:28:42 +0000 (0:00:01.603) 0:51:54.673 *****
2026-02-14 06:29:09.876609 | orchestrator | changed: [testbed-node-0
-> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-14 06:29:09.876630 | orchestrator | changed: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-14 06:29:09.876651 | orchestrator |
2026-02-14 06:29:09.876664 | orchestrator | TASK [Mask systemd units for standby ceph mds] *********************************
2026-02-14 06:29:09.876674 | orchestrator | Saturday 14 February 2026 06:28:51 +0000 (0:00:08.738) 0:52:03.411 *****
2026-02-14 06:29:09.876692 | orchestrator | changed: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-14 06:29:09.876703 | orchestrator | changed: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-14 06:29:09.876714 | orchestrator |
2026-02-14 06:29:09.876724 | orchestrator | TASK [Wait until all standbys mds are stopped] *********************************
2026-02-14 06:29:09.876735 | orchestrator | Saturday 14 February 2026 06:28:54 +0000 (0:00:03.749) 0:52:07.161 *****
2026-02-14 06:29:09.876826 | orchestrator | ok: [testbed-node-0]
2026-02-14 06:29:09.876838 | orchestrator |
2026-02-14 06:29:09.876849 | orchestrator | TASK [Create active_mdss group] ************************************************
2026-02-14 06:29:09.876860 | orchestrator | Saturday 14 February 2026 06:28:57 +0000 (0:00:02.176) 0:52:09.337 *****
2026-02-14 06:29:09.876877 | orchestrator | changed: [testbed-node-0]
2026-02-14 06:29:09.876895 | orchestrator |
2026-02-14 06:29:09.876913 | orchestrator | PLAY [Upgrade active mds] ******************************************************
2026-02-14 06:29:09.876932 | orchestrator |
2026-02-14 06:29:09.876951 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-14 06:29:09.876969 | orchestrator | Saturday 14 February 2026 06:28:58 +0000 (0:00:01.597) 0:52:10.934 *****
2026-02-14 06:29:09.876989 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4
2026-02-14 06:29:09.877006 | orchestrator |
2026-02-14 06:29:09.877026 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-14 06:29:09.877043 | orchestrator | Saturday 14 February 2026 06:28:59 +0000 (0:00:01.314) 0:52:12.249 *****
2026-02-14 06:29:09.877062 | orchestrator | ok: [testbed-node-4]
2026-02-14 06:29:09.877074 | orchestrator |
2026-02-14 06:29:09.877085 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-14 06:29:09.877095 | orchestrator | Saturday 14 February 2026 06:29:01 +0000 (0:00:01.522) 0:52:13.771 *****
2026-02-14 06:29:09.877106 | orchestrator | ok: [testbed-node-4]
2026-02-14 06:29:09.877117 | orchestrator |
2026-02-14 06:29:09.877128 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-14 06:29:09.877138 | orchestrator | Saturday 14 February 2026 06:29:02 +0000 (0:00:01.124) 0:52:14.896 *****
2026-02-14 06:29:09.877149 | orchestrator | ok: [testbed-node-4]
2026-02-14 06:29:09.877160 | orchestrator |
2026-02-14 06:29:09.877170 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-14 06:29:09.877191 | orchestrator | Saturday 14 February 2026 06:29:04 +0000 (0:00:01.512) 0:52:16.409 *****
2026-02-14 06:29:09.877202 | orchestrator | ok: [testbed-node-4]
2026-02-14 06:29:09.877217 | orchestrator |
2026-02-14 06:29:09.877235 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-14 06:29:09.877254 | orchestrator | Saturday 14 February 2026 06:29:05 +0000 (0:00:01.149) 0:52:17.558 *****
2026-02-14 06:29:09.877272 | orchestrator | ok: [testbed-node-4]
2026-02-14 06:29:09.877291 | orchestrator |
2026-02-14 06:29:09.877310 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-14 06:29:09.877322 | orchestrator | Saturday 14 February 2026 06:29:06 +0000 (0:00:01.136) 0:52:18.695 *****
2026-02-14 06:29:09.877333 | orchestrator | ok: [testbed-node-4]
2026-02-14 06:29:09.877344 | orchestrator |
2026-02-14 06:29:09.877355 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-14 06:29:09.877365 | orchestrator | Saturday 14 February 2026 06:29:07 +0000 (0:00:01.173) 0:52:19.869 *****
2026-02-14 06:29:09.877376 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:29:09.877387 | orchestrator |
2026-02-14 06:29:09.877398 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-14 06:29:09.877408 | orchestrator | Saturday 14 February 2026 06:29:08 +0000 (0:00:01.181) 0:52:21.051 *****
2026-02-14 06:29:09.877419 | orchestrator | ok: [testbed-node-4]
2026-02-14 06:29:09.877430 | orchestrator |
2026-02-14 06:29:09.877450 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-14 06:29:35.324857 | orchestrator | Saturday 14 February 2026 06:29:09 +0000 (0:00:01.134) 0:52:22.185 *****
2026-02-14 06:29:35.324966 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-14 06:29:35.324981 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-14 06:29:35.324991 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-14 06:29:35.325001 | orchestrator |
2026-02-14 06:29:35.325011 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-14 06:29:35.325021 | orchestrator | Saturday 14 February 2026 06:29:11 +0000 (0:00:02.072) 0:52:24.259 *****
2026-02-14 06:29:35.325031 | orchestrator | ok: [testbed-node-4]
2026-02-14 06:29:35.325042 | orchestrator |
2026-02-14 06:29:35.325052 | orchestrator | TASK [ceph-facts : Find a running mon container]
*******************************
2026-02-14 06:29:35.325062 | orchestrator | Saturday 14 February 2026 06:29:13 +0000 (0:00:01.278) 0:52:25.537 *****
2026-02-14 06:29:35.325071 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-14 06:29:35.325081 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-14 06:29:35.325090 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-14 06:29:35.325100 | orchestrator |
2026-02-14 06:29:35.325109 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-14 06:29:35.325119 | orchestrator | Saturday 14 February 2026 06:29:16 +0000 (0:00:03.286) 0:52:28.824 *****
2026-02-14 06:29:35.325128 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-14 06:29:35.325138 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-14 06:29:35.325148 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-14 06:29:35.325158 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:29:35.325167 | orchestrator |
2026-02-14 06:29:35.325177 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-14 06:29:35.325186 | orchestrator | Saturday 14 February 2026 06:29:18 +0000 (0:00:01.875) 0:52:30.699 *****
2026-02-14 06:29:35.325211 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-14 06:29:35.325244 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-14 06:29:35.325254 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-14 06:29:35.325263 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:29:35.325273 | orchestrator |
2026-02-14 06:29:35.325282 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-14 06:29:35.325293 | orchestrator | Saturday 14 February 2026 06:29:19 +0000 (0:00:01.605) 0:52:32.305 *****
2026-02-14 06:29:35.325305 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-14 06:29:35.325317 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-14 06:29:35.325327 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var':
'item'})  2026-02-14 06:29:35.325337 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:29:35.325347 | orchestrator | 2026-02-14 06:29:35.325357 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-14 06:29:35.325368 | orchestrator | Saturday 14 February 2026 06:29:21 +0000 (0:00:01.182) 0:52:33.487 ***** 2026-02-14 06:29:35.325397 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'fcade5e8eca4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-14 06:29:14.103351', 'end': '2026-02-14 06:29:14.155400', 'delta': '0:00:00.052049', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fcade5e8eca4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-14 06:29:35.325412 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'b8937503c016', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-14 06:29:14.691794', 'end': '2026-02-14 06:29:14.747177', 'delta': '0:00:00.055383', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b8937503c016'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-14 06:29:35.325436 | orchestrator | ok: 
[testbed-node-4] => (item={'changed': False, 'stdout': 'bc1e9cbf1ddd', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-14 06:29:15.297375', 'end': '2026-02-14 06:29:15.355166', 'delta': '0:00:00.057791', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['bc1e9cbf1ddd'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-14 06:29:35.325448 | orchestrator | 2026-02-14 06:29:35.325459 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-14 06:29:35.325470 | orchestrator | Saturday 14 February 2026 06:29:22 +0000 (0:00:01.269) 0:52:34.756 ***** 2026-02-14 06:29:35.325481 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:29:35.325493 | orchestrator | 2026-02-14 06:29:35.325504 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-14 06:29:35.325515 | orchestrator | Saturday 14 February 2026 06:29:23 +0000 (0:00:01.269) 0:52:36.026 ***** 2026-02-14 06:29:35.325525 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:29:35.325536 | orchestrator | 2026-02-14 06:29:35.325547 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-14 06:29:35.325558 | orchestrator | Saturday 14 February 2026 06:29:25 +0000 (0:00:01.317) 0:52:37.343 ***** 2026-02-14 06:29:35.325569 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:29:35.325580 | orchestrator | 2026-02-14 06:29:35.325591 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-14 06:29:35.325602 | 
orchestrator | Saturday 14 February 2026 06:29:26 +0000 (0:00:01.180) 0:52:38.524 ***** 2026-02-14 06:29:35.325612 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-14 06:29:35.325624 | orchestrator | 2026-02-14 06:29:35.325634 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-14 06:29:35.325645 | orchestrator | Saturday 14 February 2026 06:29:28 +0000 (0:00:01.978) 0:52:40.503 ***** 2026-02-14 06:29:35.325656 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:29:35.325667 | orchestrator | 2026-02-14 06:29:35.325678 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-14 06:29:35.325689 | orchestrator | Saturday 14 February 2026 06:29:29 +0000 (0:00:01.230) 0:52:41.733 ***** 2026-02-14 06:29:35.325700 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:29:35.325711 | orchestrator | 2026-02-14 06:29:35.325722 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-14 06:29:35.325732 | orchestrator | Saturday 14 February 2026 06:29:30 +0000 (0:00:01.174) 0:52:42.907 ***** 2026-02-14 06:29:35.325742 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:29:35.325751 | orchestrator | 2026-02-14 06:29:35.325761 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-14 06:29:35.325805 | orchestrator | Saturday 14 February 2026 06:29:31 +0000 (0:00:01.219) 0:52:44.127 ***** 2026-02-14 06:29:35.325816 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:29:35.325825 | orchestrator | 2026-02-14 06:29:35.325835 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-14 06:29:35.325844 | orchestrator | Saturday 14 February 2026 06:29:32 +0000 (0:00:01.126) 0:52:45.254 ***** 2026-02-14 06:29:35.325854 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:29:35.325863 | 
orchestrator | 2026-02-14 06:29:35.325872 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-14 06:29:35.325882 | orchestrator | Saturday 14 February 2026 06:29:34 +0000 (0:00:01.139) 0:52:46.394 ***** 2026-02-14 06:29:35.325898 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:29:40.333246 | orchestrator | 2026-02-14 06:29:40.334157 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-14 06:29:40.334250 | orchestrator | Saturday 14 February 2026 06:29:35 +0000 (0:00:01.245) 0:52:47.639 ***** 2026-02-14 06:29:40.334263 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:29:40.334272 | orchestrator | 2026-02-14 06:29:40.334279 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-14 06:29:40.334287 | orchestrator | Saturday 14 February 2026 06:29:36 +0000 (0:00:01.174) 0:52:48.813 ***** 2026-02-14 06:29:40.334295 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:29:40.334304 | orchestrator | 2026-02-14 06:29:40.334312 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-14 06:29:40.334320 | orchestrator | Saturday 14 February 2026 06:29:37 +0000 (0:00:01.236) 0:52:50.050 ***** 2026-02-14 06:29:40.334327 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:29:40.334335 | orchestrator | 2026-02-14 06:29:40.334343 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-14 06:29:40.334352 | orchestrator | Saturday 14 February 2026 06:29:38 +0000 (0:00:01.145) 0:52:51.196 ***** 2026-02-14 06:29:40.334359 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:29:40.334367 | orchestrator | 2026-02-14 06:29:40.334375 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-14 06:29:40.334383 | orchestrator | Saturday 14 February 2026 06:29:40 +0000 
(0:00:01.195) 0:52:52.391 *****
2026-02-14 06:29:40.334393 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-14 06:29:40.334417 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--df737486--1b51--5b4a--92b8--76d7a8957091-osd--block--df737486--1b51--5b4a--92b8--76d7a8957091', 'dm-uuid-LVM-EB1XqRdFm5BWl32sOsML4BzRiPAaSfab8xK25yZZCddpKgHxc3NQuNizerGpwRdL'], 'uuids': ['cbd2394d-6972-4905-b52e-c3fabde9215a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f960435b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['8xK25y-ZZCd-dpKg-Hxc3-NQuN-izer-GpwRdL']}})
2026-02-14 06:29:40.334429 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_600e740f-7698-45cf-9f18-28df3084435e', 'scsi-SQEMU_QEMU_HARDDISK_600e740f-7698-45cf-9f18-28df3084435e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '600e740f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-14 06:29:40.334438 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-PPJEoE-t8lH-Lsu9-VCxv-DzG3-SEi9-DpziQD', 'scsi-0QEMU_QEMU_HARDDISK_f8b6a063-90c4-466f-950a-7ec8689e5fcc', 'scsi-SQEMU_QEMU_HARDDISK_f8b6a063-90c4-466f-950a-7ec8689e5fcc'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f8b6a063', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--7b577363--2bac--543e--944e--5354861b1af5-osd--block--7b577363--2bac--543e--944e--5354861b1af5']}})
2026-02-14 06:29:40.334454 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-14 06:29:40.334483 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-14 06:29:40.334491 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-04-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-14 06:29:40.334500 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-14 06:29:40.334511 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-SN89xi-mT6S-OMxw-vqsI-uUyB-OeGR-YcFBXd', 'dm-uuid-CRYPT-LUKS2-366eda1d300c4ff497bf868d045a2886-SN89xi-mT6S-OMxw-vqsI-uUyB-OeGR-YcFBXd'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-14 06:29:40.334518 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-14 06:29:40.334526 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7b577363--2bac--543e--944e--5354861b1af5-osd--block--7b577363--2bac--543e--944e--5354861b1af5', 'dm-uuid-LVM-0VL0CxXxe2vdWsz49rVaxb3uSV9CWoFcSN89ximT6SOMxwvqsIuUyBOeGRYcFBXd'], 'uuids': ['366eda1d-300c-4ff4-97bf-868d045a2886'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f8b6a063', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['SN89xi-mT6S-OMxw-vqsI-uUyB-OeGR-YcFBXd']}})
2026-02-14 06:29:40.334535 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-9XBo1I-CFLx-ADHD-pZVq-BmE6-mdcf-IWW9zX', 'scsi-0QEMU_QEMU_HARDDISK_f960435b-b83d-47c8-ac31-653544f80bd0', 'scsi-SQEMU_QEMU_HARDDISK_f960435b-b83d-47c8-ac31-653544f80bd0'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f960435b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--df737486--1b51--5b4a--92b8--76d7a8957091-osd--block--df737486--1b51--5b4a--92b8--76d7a8957091']}})
2026-02-14 06:29:40.334553 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-14 06:29:41.744941 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '677d5586', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part16', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part14', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part15', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part1', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-14 06:29:41.745053 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-14 06:29:41.745071 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-14 06:29:41.745085 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-8xK25y-ZZCd-dpKg-Hxc3-NQuN-izer-GpwRdL', 'dm-uuid-CRYPT-LUKS2-cbd2394d69724905b52ec3fabde9215a-8xK25y-ZZCd-dpKg-Hxc3-NQuN-izer-GpwRdL'], 'uuids': [], 'labels': [], 'masters':
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-14 06:29:41.745119 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:29:41.745133 | orchestrator |
2026-02-14 06:29:41.745144 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-14 06:29:41.745156 | orchestrator | Saturday 14 February 2026 06:29:41 +0000 (0:00:01.429) 0:52:53.821 *****
2026-02-14 06:29:41.745191 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-14 06:29:41.745206 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--df737486--1b51--5b4a--92b8--76d7a8957091-osd--block--df737486--1b51--5b4a--92b8--76d7a8957091', 'dm-uuid-LVM-EB1XqRdFm5BWl32sOsML4BzRiPAaSfab8xK25yZZCddpKgHxc3NQuNizerGpwRdL'], 'uuids': ['cbd2394d-6972-4905-b52e-c3fabde9215a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f960435b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['8xK25y-ZZCd-dpKg-Hxc3-NQuN-izer-GpwRdL']}}, 'ansible_loop_var': 'item'})
2026-02-14 06:29:41.745228 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_600e740f-7698-45cf-9f18-28df3084435e', 'scsi-SQEMU_QEMU_HARDDISK_600e740f-7698-45cf-9f18-28df3084435e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '600e740f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-14 06:29:41.745241 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-PPJEoE-t8lH-Lsu9-VCxv-DzG3-SEi9-DpziQD', 'scsi-0QEMU_QEMU_HARDDISK_f8b6a063-90c4-466f-950a-7ec8689e5fcc', 'scsi-SQEMU_QEMU_HARDDISK_f8b6a063-90c4-466f-950a-7ec8689e5fcc'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f8b6a063', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--7b577363--2bac--543e--944e--5354861b1af5-osd--block--7b577363--2bac--543e--944e--5354861b1af5']}}, 'ansible_loop_var': 'item'})
2026-02-14 06:29:41.745262 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-14 06:29:41.745281 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-14 06:29:42.933262 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-04-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-14 06:29:42.933349 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-14 06:29:42.933359 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-SN89xi-mT6S-OMxw-vqsI-uUyB-OeGR-YcFBXd', 'dm-uuid-CRYPT-LUKS2-366eda1d300c4ff497bf868d045a2886-SN89xi-mT6S-OMxw-vqsI-uUyB-OeGR-YcFBXd'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-14 06:29:42.933367 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-14 06:29:42.933397 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7b577363--2bac--543e--944e--5354861b1af5-osd--block--7b577363--2bac--543e--944e--5354861b1af5', 'dm-uuid-LVM-0VL0CxXxe2vdWsz49rVaxb3uSV9CWoFcSN89ximT6SOMxwvqsIuUyBOeGRYcFBXd'], 'uuids': ['366eda1d-300c-4ff4-97bf-868d045a2886'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f8b6a063', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['SN89xi-mT6S-OMxw-vqsI-uUyB-OeGR-YcFBXd']}}, 'ansible_loop_var': 'item'})
2026-02-14 06:29:42.933419 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-9XBo1I-CFLx-ADHD-pZVq-BmE6-mdcf-IWW9zX', 'scsi-0QEMU_QEMU_HARDDISK_f960435b-b83d-47c8-ac31-653544f80bd0', 'scsi-SQEMU_QEMU_HARDDISK_f960435b-b83d-47c8-ac31-653544f80bd0'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f960435b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--df737486--1b51--5b4a--92b8--76d7a8957091-osd--block--df737486--1b51--5b4a--92b8--76d7a8957091']}}, 'ansible_loop_var': 'item'})
2026-02-14 06:29:42.933428 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-14 06:29:42.933440 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '677d5586', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part16', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part14', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part15', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part1', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-14 06:29:42.933453 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-14 06:29:42.933460 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-14 06:30:18.039801 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-8xK25y-ZZCd-dpKg-Hxc3-NQuN-izer-GpwRdL', 'dm-uuid-CRYPT-LUKS2-cbd2394d69724905b52ec3fabde9215a-8xK25y-ZZCd-dpKg-Hxc3-NQuN-izer-GpwRdL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1',
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:30:18.039990 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:30:18.040009 | orchestrator | 2026-02-14 06:30:18.040022 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-14 06:30:18.040035 | orchestrator | Saturday 14 February 2026 06:29:42 +0000 (0:00:01.427) 0:52:55.248 ***** 2026-02-14 06:30:18.040046 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:30:18.040058 | orchestrator | 2026-02-14 06:30:18.040069 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-14 06:30:18.040080 | orchestrator | Saturday 14 February 2026 06:29:44 +0000 (0:00:01.488) 0:52:56.737 ***** 2026-02-14 06:30:18.040091 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:30:18.040101 | orchestrator | 2026-02-14 06:30:18.040112 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-14 06:30:18.040123 | orchestrator | Saturday 14 February 2026 06:29:45 +0000 (0:00:01.169) 0:52:57.907 ***** 2026-02-14 06:30:18.040161 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:30:18.040173 | orchestrator | 2026-02-14 06:30:18.040184 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-14 06:30:18.040195 | orchestrator | Saturday 14 February 2026 06:29:47 +0000 (0:00:01.452) 0:52:59.360 ***** 2026-02-14 06:30:18.040205 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:30:18.040216 | orchestrator | 2026-02-14 06:30:18.040227 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-14 06:30:18.040238 | orchestrator | Saturday 14 February 2026 06:29:48 +0000 (0:00:01.148) 0:53:00.508 ***** 2026-02-14 06:30:18.040248 | orchestrator | skipping: [testbed-node-4] 2026-02-14 
06:30:18.040259 | orchestrator | 2026-02-14 06:30:18.040270 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-14 06:30:18.040280 | orchestrator | Saturday 14 February 2026 06:29:49 +0000 (0:00:01.284) 0:53:01.793 ***** 2026-02-14 06:30:18.040291 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:30:18.040302 | orchestrator | 2026-02-14 06:30:18.040312 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-14 06:30:18.040323 | orchestrator | Saturday 14 February 2026 06:29:50 +0000 (0:00:01.240) 0:53:03.033 ***** 2026-02-14 06:30:18.040334 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-14 06:30:18.040345 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-14 06:30:18.040356 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-14 06:30:18.040367 | orchestrator | 2026-02-14 06:30:18.040377 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-14 06:30:18.040388 | orchestrator | Saturday 14 February 2026 06:29:52 +0000 (0:00:02.264) 0:53:05.298 ***** 2026-02-14 06:30:18.040398 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-14 06:30:18.040410 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-14 06:30:18.040420 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-14 06:30:18.040431 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:30:18.040441 | orchestrator | 2026-02-14 06:30:18.040452 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-14 06:30:18.040463 | orchestrator | Saturday 14 February 2026 06:29:54 +0000 (0:00:01.196) 0:53:06.495 ***** 2026-02-14 06:30:18.040473 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4 2026-02-14 06:30:18.040485 | 
orchestrator | 2026-02-14 06:30:18.040496 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-14 06:30:18.040509 | orchestrator | Saturday 14 February 2026 06:29:55 +0000 (0:00:01.125) 0:53:07.621 ***** 2026-02-14 06:30:18.040519 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:30:18.040530 | orchestrator | 2026-02-14 06:30:18.040540 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-14 06:30:18.040551 | orchestrator | Saturday 14 February 2026 06:29:56 +0000 (0:00:01.133) 0:53:08.754 ***** 2026-02-14 06:30:18.040562 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:30:18.040572 | orchestrator | 2026-02-14 06:30:18.040583 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-14 06:30:18.040594 | orchestrator | Saturday 14 February 2026 06:29:57 +0000 (0:00:01.195) 0:53:09.950 ***** 2026-02-14 06:30:18.040604 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:30:18.040615 | orchestrator | 2026-02-14 06:30:18.040626 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-14 06:30:18.040637 | orchestrator | Saturday 14 February 2026 06:29:58 +0000 (0:00:01.178) 0:53:11.129 ***** 2026-02-14 06:30:18.040647 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:30:18.040658 | orchestrator | 2026-02-14 06:30:18.040669 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-14 06:30:18.040680 | orchestrator | Saturday 14 February 2026 06:30:00 +0000 (0:00:01.263) 0:53:12.392 ***** 2026-02-14 06:30:18.040691 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-14 06:30:18.040729 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-14 06:30:18.040742 | orchestrator | skipping: [testbed-node-4] 
=> (item=testbed-node-5)  2026-02-14 06:30:18.040753 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:30:18.040764 | orchestrator | 2026-02-14 06:30:18.040774 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-14 06:30:18.040785 | orchestrator | Saturday 14 February 2026 06:30:01 +0000 (0:00:01.461) 0:53:13.854 ***** 2026-02-14 06:30:18.040796 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-14 06:30:18.040807 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-14 06:30:18.040931 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-14 06:30:18.040949 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:30:18.040960 | orchestrator | 2026-02-14 06:30:18.040971 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-14 06:30:18.040987 | orchestrator | Saturday 14 February 2026 06:30:02 +0000 (0:00:01.422) 0:53:15.276 ***** 2026-02-14 06:30:18.040998 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-14 06:30:18.041008 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-14 06:30:18.041019 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-14 06:30:18.041029 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:30:18.041040 | orchestrator | 2026-02-14 06:30:18.041051 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-14 06:30:18.041061 | orchestrator | Saturday 14 February 2026 06:30:04 +0000 (0:00:01.462) 0:53:16.739 ***** 2026-02-14 06:30:18.041072 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:30:18.041083 | orchestrator | 2026-02-14 06:30:18.041093 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-14 06:30:18.041104 | orchestrator | Saturday 14 February 2026 06:30:05 +0000 
(0:00:01.187) 0:53:17.926 ***** 2026-02-14 06:30:18.041115 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-14 06:30:18.041125 | orchestrator | 2026-02-14 06:30:18.041136 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-14 06:30:18.041146 | orchestrator | Saturday 14 February 2026 06:30:07 +0000 (0:00:01.751) 0:53:19.678 ***** 2026-02-14 06:30:18.041157 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 06:30:18.041167 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:30:18.041178 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 06:30:18.041188 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-14 06:30:18.041199 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-02-14 06:30:18.041210 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-14 06:30:18.041220 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-14 06:30:18.041231 | orchestrator | 2026-02-14 06:30:18.041242 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-14 06:30:18.041252 | orchestrator | Saturday 14 February 2026 06:30:09 +0000 (0:00:01.945) 0:53:21.623 ***** 2026-02-14 06:30:18.041262 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 06:30:18.041273 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:30:18.041284 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 06:30:18.041294 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-02-14 06:30:18.041305 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-02-14 06:30:18.041315 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-14 06:30:18.041334 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-14 06:30:18.041345 | orchestrator | 2026-02-14 06:30:18.041356 | orchestrator | TASK [Prevent restart from the packaging] ************************************** 2026-02-14 06:30:18.041366 | orchestrator | Saturday 14 February 2026 06:30:11 +0000 (0:00:02.521) 0:53:24.145 ***** 2026-02-14 06:30:18.041377 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:30:18.041388 | orchestrator | 2026-02-14 06:30:18.041399 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-14 06:30:18.041409 | orchestrator | Saturday 14 February 2026 06:30:12 +0000 (0:00:01.151) 0:53:25.296 ***** 2026-02-14 06:30:18.041420 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4 2026-02-14 06:30:18.041431 | orchestrator | 2026-02-14 06:30:18.041442 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-14 06:30:18.041452 | orchestrator | Saturday 14 February 2026 06:30:14 +0000 (0:00:01.186) 0:53:26.483 ***** 2026-02-14 06:30:18.041463 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4 2026-02-14 06:30:18.041474 | orchestrator | 2026-02-14 06:30:18.041484 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-14 06:30:18.041495 | orchestrator | Saturday 14 February 2026 06:30:15 +0000 (0:00:01.197) 0:53:27.681 ***** 2026-02-14 06:30:18.041506 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:30:18.041516 | orchestrator | 2026-02-14 06:30:18.041527 | orchestrator 
| TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-14 06:30:18.041538 | orchestrator | Saturday 14 February 2026 06:30:16 +0000 (0:00:01.161) 0:53:28.843 ***** 2026-02-14 06:30:18.041548 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:30:18.041559 | orchestrator | 2026-02-14 06:30:18.041570 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-14 06:30:18.041589 | orchestrator | Saturday 14 February 2026 06:30:18 +0000 (0:00:01.508) 0:53:30.351 ***** 2026-02-14 06:31:09.828344 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:31:09.828431 | orchestrator | 2026-02-14 06:31:09.828442 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-14 06:31:09.828450 | orchestrator | Saturday 14 February 2026 06:30:19 +0000 (0:00:01.628) 0:53:31.980 ***** 2026-02-14 06:31:09.828457 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:31:09.828464 | orchestrator | 2026-02-14 06:31:09.828470 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-14 06:31:09.828477 | orchestrator | Saturday 14 February 2026 06:30:21 +0000 (0:00:01.630) 0:53:33.611 ***** 2026-02-14 06:31:09.828483 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:09.828490 | orchestrator | 2026-02-14 06:31:09.828497 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-14 06:31:09.828503 | orchestrator | Saturday 14 February 2026 06:30:22 +0000 (0:00:01.102) 0:53:34.713 ***** 2026-02-14 06:31:09.828523 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:09.828529 | orchestrator | 2026-02-14 06:31:09.828536 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-14 06:31:09.828543 | orchestrator | Saturday 14 February 2026 06:30:23 +0000 (0:00:01.253) 0:53:35.967 ***** 2026-02-14 06:31:09.828549 | 
orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:09.828555 | orchestrator | 2026-02-14 06:31:09.828561 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-14 06:31:09.828568 | orchestrator | Saturday 14 February 2026 06:30:24 +0000 (0:00:01.170) 0:53:37.137 ***** 2026-02-14 06:31:09.828574 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:31:09.828580 | orchestrator | 2026-02-14 06:31:09.828586 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-14 06:31:09.828593 | orchestrator | Saturday 14 February 2026 06:30:26 +0000 (0:00:01.556) 0:53:38.693 ***** 2026-02-14 06:31:09.828599 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:31:09.828605 | orchestrator | 2026-02-14 06:31:09.828612 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-14 06:31:09.828637 | orchestrator | Saturday 14 February 2026 06:30:27 +0000 (0:00:01.526) 0:53:40.220 ***** 2026-02-14 06:31:09.828643 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:09.828650 | orchestrator | 2026-02-14 06:31:09.828656 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-14 06:31:09.828662 | orchestrator | Saturday 14 February 2026 06:30:29 +0000 (0:00:01.176) 0:53:41.396 ***** 2026-02-14 06:31:09.828668 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:09.828674 | orchestrator | 2026-02-14 06:31:09.828681 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-14 06:31:09.828687 | orchestrator | Saturday 14 February 2026 06:30:30 +0000 (0:00:01.227) 0:53:42.623 ***** 2026-02-14 06:31:09.828693 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:31:09.828699 | orchestrator | 2026-02-14 06:31:09.828705 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-14 
06:31:09.828712 | orchestrator | Saturday 14 February 2026 06:30:31 +0000 (0:00:01.169) 0:53:43.793 ***** 2026-02-14 06:31:09.828718 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:31:09.828724 | orchestrator | 2026-02-14 06:31:09.828730 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-14 06:31:09.828736 | orchestrator | Saturday 14 February 2026 06:30:32 +0000 (0:00:01.162) 0:53:44.955 ***** 2026-02-14 06:31:09.828742 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:31:09.828749 | orchestrator | 2026-02-14 06:31:09.828755 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-14 06:31:09.828761 | orchestrator | Saturday 14 February 2026 06:30:33 +0000 (0:00:01.211) 0:53:46.167 ***** 2026-02-14 06:31:09.828767 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:09.828773 | orchestrator | 2026-02-14 06:31:09.828780 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-14 06:31:09.828786 | orchestrator | Saturday 14 February 2026 06:30:34 +0000 (0:00:01.129) 0:53:47.296 ***** 2026-02-14 06:31:09.828792 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:09.828798 | orchestrator | 2026-02-14 06:31:09.828804 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-14 06:31:09.828811 | orchestrator | Saturday 14 February 2026 06:30:36 +0000 (0:00:01.158) 0:53:48.455 ***** 2026-02-14 06:31:09.828817 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:09.828823 | orchestrator | 2026-02-14 06:31:09.828829 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-14 06:31:09.828836 | orchestrator | Saturday 14 February 2026 06:30:37 +0000 (0:00:01.182) 0:53:49.638 ***** 2026-02-14 06:31:09.828842 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:31:09.828848 | orchestrator | 2026-02-14 
06:31:09.828854 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-14 06:31:09.828861 | orchestrator | Saturday 14 February 2026 06:30:38 +0000 (0:00:01.180) 0:53:50.819 ***** 2026-02-14 06:31:09.828867 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:31:09.828873 | orchestrator | 2026-02-14 06:31:09.828919 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-14 06:31:09.828926 | orchestrator | Saturday 14 February 2026 06:30:39 +0000 (0:00:01.355) 0:53:52.174 ***** 2026-02-14 06:31:09.828933 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:09.828941 | orchestrator | 2026-02-14 06:31:09.828948 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-14 06:31:09.828955 | orchestrator | Saturday 14 February 2026 06:30:40 +0000 (0:00:01.144) 0:53:53.318 ***** 2026-02-14 06:31:09.828963 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:09.828970 | orchestrator | 2026-02-14 06:31:09.828977 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-14 06:31:09.828985 | orchestrator | Saturday 14 February 2026 06:30:42 +0000 (0:00:01.180) 0:53:54.499 ***** 2026-02-14 06:31:09.828992 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:09.828999 | orchestrator | 2026-02-14 06:31:09.829007 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-14 06:31:09.829020 | orchestrator | Saturday 14 February 2026 06:30:43 +0000 (0:00:01.259) 0:53:55.759 ***** 2026-02-14 06:31:09.829027 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:09.829035 | orchestrator | 2026-02-14 06:31:09.829042 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-14 06:31:09.829062 | orchestrator | Saturday 14 February 2026 06:30:44 +0000 (0:00:01.244) 
0:53:57.003 ***** 2026-02-14 06:31:09.829069 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:09.829077 | orchestrator | 2026-02-14 06:31:09.829084 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-14 06:31:09.829091 | orchestrator | Saturday 14 February 2026 06:30:45 +0000 (0:00:01.159) 0:53:58.162 ***** 2026-02-14 06:31:09.829097 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:09.829104 | orchestrator | 2026-02-14 06:31:09.829110 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-14 06:31:09.829116 | orchestrator | Saturday 14 February 2026 06:30:46 +0000 (0:00:01.141) 0:53:59.304 ***** 2026-02-14 06:31:09.829122 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:09.829128 | orchestrator | 2026-02-14 06:31:09.829135 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-14 06:31:09.829146 | orchestrator | Saturday 14 February 2026 06:30:48 +0000 (0:00:01.163) 0:54:00.467 ***** 2026-02-14 06:31:09.829152 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:09.829158 | orchestrator | 2026-02-14 06:31:09.829164 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-14 06:31:09.829171 | orchestrator | Saturday 14 February 2026 06:30:49 +0000 (0:00:01.220) 0:54:01.688 ***** 2026-02-14 06:31:09.829177 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:09.829183 | orchestrator | 2026-02-14 06:31:09.829189 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-14 06:31:09.829196 | orchestrator | Saturday 14 February 2026 06:30:50 +0000 (0:00:01.107) 0:54:02.796 ***** 2026-02-14 06:31:09.829202 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:09.829208 | orchestrator | 2026-02-14 06:31:09.829214 | orchestrator | TASK [ceph-common : 
Include configure_memory_allocator.yml] ******************** 2026-02-14 06:31:09.829220 | orchestrator | Saturday 14 February 2026 06:30:51 +0000 (0:00:01.111) 0:54:03.907 ***** 2026-02-14 06:31:09.829226 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:09.829233 | orchestrator | 2026-02-14 06:31:09.829239 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-14 06:31:09.829245 | orchestrator | Saturday 14 February 2026 06:30:52 +0000 (0:00:01.102) 0:54:05.010 ***** 2026-02-14 06:31:09.829251 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:09.829257 | orchestrator | 2026-02-14 06:31:09.829263 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-14 06:31:09.829270 | orchestrator | Saturday 14 February 2026 06:30:53 +0000 (0:00:01.292) 0:54:06.303 ***** 2026-02-14 06:31:09.829276 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:31:09.829282 | orchestrator | 2026-02-14 06:31:09.829288 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-14 06:31:09.829295 | orchestrator | Saturday 14 February 2026 06:30:55 +0000 (0:00:01.988) 0:54:08.292 ***** 2026-02-14 06:31:09.829301 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:31:09.829307 | orchestrator | 2026-02-14 06:31:09.829313 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-14 06:31:09.829319 | orchestrator | Saturday 14 February 2026 06:30:58 +0000 (0:00:02.299) 0:54:10.591 ***** 2026-02-14 06:31:09.829326 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4 2026-02-14 06:31:09.829333 | orchestrator | 2026-02-14 06:31:09.829339 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-14 06:31:09.829346 | orchestrator | Saturday 14 February 2026 06:30:59 +0000 (0:00:01.161) 
0:54:11.753 ***** 2026-02-14 06:31:09.829352 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:09.829363 | orchestrator | 2026-02-14 06:31:09.829369 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-14 06:31:09.829375 | orchestrator | Saturday 14 February 2026 06:31:00 +0000 (0:00:01.116) 0:54:12.870 ***** 2026-02-14 06:31:09.829381 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:09.829387 | orchestrator | 2026-02-14 06:31:09.829394 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-14 06:31:09.829400 | orchestrator | Saturday 14 February 2026 06:31:01 +0000 (0:00:01.164) 0:54:14.034 ***** 2026-02-14 06:31:09.829406 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-14 06:31:09.829412 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-14 06:31:09.829419 | orchestrator | 2026-02-14 06:31:09.829425 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-14 06:31:09.829431 | orchestrator | Saturday 14 February 2026 06:31:03 +0000 (0:00:01.859) 0:54:15.893 ***** 2026-02-14 06:31:09.829437 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:31:09.829443 | orchestrator | 2026-02-14 06:31:09.829449 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-14 06:31:09.829456 | orchestrator | Saturday 14 February 2026 06:31:05 +0000 (0:00:01.438) 0:54:17.332 ***** 2026-02-14 06:31:09.829462 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:09.829468 | orchestrator | 2026-02-14 06:31:09.829474 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-14 06:31:09.829480 | orchestrator | Saturday 14 February 2026 06:31:06 +0000 (0:00:01.189) 0:54:18.522 ***** 2026-02-14 06:31:09.829486 | 
orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:09.829492 | orchestrator | 2026-02-14 06:31:09.829499 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-14 06:31:09.829505 | orchestrator | Saturday 14 February 2026 06:31:07 +0000 (0:00:01.154) 0:54:19.677 ***** 2026-02-14 06:31:09.829511 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:09.829517 | orchestrator | 2026-02-14 06:31:09.829523 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-14 06:31:09.829529 | orchestrator | Saturday 14 February 2026 06:31:08 +0000 (0:00:01.180) 0:54:20.857 ***** 2026-02-14 06:31:09.829536 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4 2026-02-14 06:31:09.829542 | orchestrator | 2026-02-14 06:31:09.829548 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-14 06:31:09.829558 | orchestrator | Saturday 14 February 2026 06:31:09 +0000 (0:00:01.280) 0:54:22.138 ***** 2026-02-14 06:31:57.355588 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:31:57.355706 | orchestrator | 2026-02-14 06:31:57.355721 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-14 06:31:57.355734 | orchestrator | Saturday 14 February 2026 06:31:11 +0000 (0:00:01.733) 0:54:23.872 ***** 2026-02-14 06:31:57.355746 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-14 06:31:57.355757 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-14 06:31:57.355767 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-14 06:31:57.355778 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:57.355790 | orchestrator | 2026-02-14 06:31:57.355816 | orchestrator | TASK [ceph-container-common 
: Pulling node-exporter container image] *********** 2026-02-14 06:31:57.355828 | orchestrator | Saturday 14 February 2026 06:31:12 +0000 (0:00:01.231) 0:54:25.103 ***** 2026-02-14 06:31:57.355838 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:57.355849 | orchestrator | 2026-02-14 06:31:57.355860 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-14 06:31:57.355871 | orchestrator | Saturday 14 February 2026 06:31:13 +0000 (0:00:01.154) 0:54:26.258 ***** 2026-02-14 06:31:57.355881 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:57.355919 | orchestrator | 2026-02-14 06:31:57.356006 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-14 06:31:57.356019 | orchestrator | Saturday 14 February 2026 06:31:15 +0000 (0:00:01.198) 0:54:27.457 ***** 2026-02-14 06:31:57.356029 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:57.356040 | orchestrator | 2026-02-14 06:31:57.356052 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-14 06:31:57.356063 | orchestrator | Saturday 14 February 2026 06:31:16 +0000 (0:00:01.300) 0:54:28.757 ***** 2026-02-14 06:31:57.356074 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:57.356085 | orchestrator | 2026-02-14 06:31:57.356095 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-14 06:31:57.356106 | orchestrator | Saturday 14 February 2026 06:31:17 +0000 (0:00:01.164) 0:54:29.922 ***** 2026-02-14 06:31:57.356119 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:57.356132 | orchestrator | 2026-02-14 06:31:57.356144 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-14 06:31:57.356156 | orchestrator | Saturday 14 February 2026 06:31:18 +0000 (0:00:01.182) 0:54:31.105 ***** 2026-02-14 06:31:57.356169 | orchestrator | 
ok: [testbed-node-4] 2026-02-14 06:31:57.356181 | orchestrator | 2026-02-14 06:31:57.356193 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-14 06:31:57.356206 | orchestrator | Saturday 14 February 2026 06:31:21 +0000 (0:00:02.529) 0:54:33.634 ***** 2026-02-14 06:31:57.356218 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:31:57.356230 | orchestrator | 2026-02-14 06:31:57.356242 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-14 06:31:57.356254 | orchestrator | Saturday 14 February 2026 06:31:22 +0000 (0:00:01.155) 0:54:34.790 ***** 2026-02-14 06:31:57.356267 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4 2026-02-14 06:31:57.356279 | orchestrator | 2026-02-14 06:31:57.356291 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-14 06:31:57.356303 | orchestrator | Saturday 14 February 2026 06:31:23 +0000 (0:00:01.128) 0:54:35.918 ***** 2026-02-14 06:31:57.356316 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:57.356328 | orchestrator | 2026-02-14 06:31:57.356340 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-14 06:31:57.356353 | orchestrator | Saturday 14 February 2026 06:31:24 +0000 (0:00:01.152) 0:54:37.070 ***** 2026-02-14 06:31:57.356365 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:57.356377 | orchestrator | 2026-02-14 06:31:57.356389 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-14 06:31:57.356402 | orchestrator | Saturday 14 February 2026 06:31:25 +0000 (0:00:01.257) 0:54:38.328 ***** 2026-02-14 06:31:57.356412 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:57.356423 | orchestrator | 2026-02-14 06:31:57.356437 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
mimic] ********************* 2026-02-14 06:31:57.356456 | orchestrator | Saturday 14 February 2026 06:31:27 +0000 (0:00:01.258) 0:54:39.586 ***** 2026-02-14 06:31:57.356474 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:57.356492 | orchestrator | 2026-02-14 06:31:57.356510 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-14 06:31:57.356527 | orchestrator | Saturday 14 February 2026 06:31:28 +0000 (0:00:01.190) 0:54:40.777 ***** 2026-02-14 06:31:57.356544 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:57.356559 | orchestrator | 2026-02-14 06:31:57.356575 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-14 06:31:57.356594 | orchestrator | Saturday 14 February 2026 06:31:29 +0000 (0:00:01.182) 0:54:41.959 ***** 2026-02-14 06:31:57.356613 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:57.356632 | orchestrator | 2026-02-14 06:31:57.356652 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-14 06:31:57.356670 | orchestrator | Saturday 14 February 2026 06:31:30 +0000 (0:00:01.141) 0:54:43.101 ***** 2026-02-14 06:31:57.356695 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:57.356705 | orchestrator | 2026-02-14 06:31:57.356716 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-14 06:31:57.356727 | orchestrator | Saturday 14 February 2026 06:31:31 +0000 (0:00:01.156) 0:54:44.257 ***** 2026-02-14 06:31:57.356738 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:57.356748 | orchestrator | 2026-02-14 06:31:57.356759 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-14 06:31:57.356770 | orchestrator | Saturday 14 February 2026 06:31:33 +0000 (0:00:01.169) 0:54:45.427 ***** 2026-02-14 06:31:57.356781 | orchestrator | ok: [testbed-node-4] 
2026-02-14 06:31:57.356792 | orchestrator | 2026-02-14 06:31:57.356803 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-14 06:31:57.356833 | orchestrator | Saturday 14 February 2026 06:31:34 +0000 (0:00:01.136) 0:54:46.564 ***** 2026-02-14 06:31:57.356845 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4 2026-02-14 06:31:57.356857 | orchestrator | 2026-02-14 06:31:57.356868 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-14 06:31:57.356879 | orchestrator | Saturday 14 February 2026 06:31:35 +0000 (0:00:01.139) 0:54:47.703 ***** 2026-02-14 06:31:57.356890 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph) 2026-02-14 06:31:57.356901 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-02-14 06:31:57.356912 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-02-14 06:31:57.356922 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-02-14 06:31:57.356974 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-02-14 06:31:57.356994 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-02-14 06:31:57.357014 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-02-14 06:31:57.357033 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-02-14 06:31:57.357053 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-14 06:31:57.357072 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-14 06:31:57.357092 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-14 06:31:57.357114 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-14 06:31:57.357133 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-14 06:31:57.357402 | 
orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-14 06:31:57.357426 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-02-14 06:31:57.357438 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-02-14 06:31:57.357449 | orchestrator | 2026-02-14 06:31:57.357459 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-14 06:31:57.357470 | orchestrator | Saturday 14 February 2026 06:31:42 +0000 (0:00:06.632) 0:54:54.336 ***** 2026-02-14 06:31:57.357481 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4 2026-02-14 06:31:57.357492 | orchestrator | 2026-02-14 06:31:57.357503 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-14 06:31:57.357514 | orchestrator | Saturday 14 February 2026 06:31:43 +0000 (0:00:01.228) 0:54:55.564 ***** 2026-02-14 06:31:57.357525 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-14 06:31:57.357537 | orchestrator | 2026-02-14 06:31:57.357548 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-14 06:31:57.357559 | orchestrator | Saturday 14 February 2026 06:31:44 +0000 (0:00:01.556) 0:54:57.121 ***** 2026-02-14 06:31:57.357570 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-14 06:31:57.357581 | orchestrator | 2026-02-14 06:31:57.357592 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-14 06:31:57.357614 | orchestrator | Saturday 14 February 2026 06:31:46 +0000 (0:00:02.040) 0:54:59.162 ***** 2026-02-14 06:31:57.357625 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:57.357636 | orchestrator | 
2026-02-14 06:31:57.357646 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-14 06:31:57.357657 | orchestrator | Saturday 14 February 2026 06:31:47 +0000 (0:00:01.159) 0:55:00.321 ***** 2026-02-14 06:31:57.357668 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:57.357679 | orchestrator | 2026-02-14 06:31:57.357690 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-14 06:31:57.357701 | orchestrator | Saturday 14 February 2026 06:31:49 +0000 (0:00:01.152) 0:55:01.474 ***** 2026-02-14 06:31:57.357711 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:57.357722 | orchestrator | 2026-02-14 06:31:57.357733 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-14 06:31:57.357744 | orchestrator | Saturday 14 February 2026 06:31:50 +0000 (0:00:01.133) 0:55:02.608 ***** 2026-02-14 06:31:57.357755 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:57.357766 | orchestrator | 2026-02-14 06:31:57.357776 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-14 06:31:57.357791 | orchestrator | Saturday 14 February 2026 06:31:51 +0000 (0:00:01.217) 0:55:03.825 ***** 2026-02-14 06:31:57.357810 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:57.357827 | orchestrator | 2026-02-14 06:31:57.357845 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-14 06:31:57.357863 | orchestrator | Saturday 14 February 2026 06:31:52 +0000 (0:00:01.170) 0:55:04.996 ***** 2026-02-14 06:31:57.357883 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:57.357901 | orchestrator | 2026-02-14 06:31:57.357918 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-14 06:31:57.357958 | 
orchestrator | Saturday 14 February 2026 06:31:53 +0000 (0:00:01.138) 0:55:06.134 ***** 2026-02-14 06:31:57.357970 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:57.357981 | orchestrator | 2026-02-14 06:31:57.357991 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-14 06:31:57.358002 | orchestrator | Saturday 14 February 2026 06:31:54 +0000 (0:00:01.163) 0:55:07.298 ***** 2026-02-14 06:31:57.358013 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:57.358084 | orchestrator | 2026-02-14 06:31:57.358095 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-14 06:31:57.358106 | orchestrator | Saturday 14 February 2026 06:31:56 +0000 (0:00:01.142) 0:55:08.440 ***** 2026-02-14 06:31:57.358117 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:31:57.358128 | orchestrator | 2026-02-14 06:31:57.358151 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-14 06:32:54.118088 | orchestrator | Saturday 14 February 2026 06:31:57 +0000 (0:00:01.228) 0:55:09.668 ***** 2026-02-14 06:32:54.118168 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:32:54.118175 | orchestrator | 2026-02-14 06:32:54.118180 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-14 06:32:54.118185 | orchestrator | Saturday 14 February 2026 06:31:58 +0000 (0:00:01.160) 0:55:10.829 ***** 2026-02-14 06:32:54.118189 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:32:54.118193 | orchestrator | 2026-02-14 06:32:54.118197 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-14 06:32:54.118202 | orchestrator | Saturday 14 February 2026 06:31:59 +0000 (0:00:01.156) 0:55:11.986 ***** 2026-02-14 06:32:54.118216 | orchestrator | changed: [testbed-node-4 -> 
testbed-node-2(192.168.16.12)] 2026-02-14 06:32:54.118221 | orchestrator | 2026-02-14 06:32:54.118224 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-14 06:32:54.118228 | orchestrator | Saturday 14 February 2026 06:32:04 +0000 (0:00:04.762) 0:55:16.749 ***** 2026-02-14 06:32:54.118248 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-14 06:32:54.118253 | orchestrator | 2026-02-14 06:32:54.118257 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-14 06:32:54.118261 | orchestrator | Saturday 14 February 2026 06:32:05 +0000 (0:00:01.245) 0:55:17.995 ***** 2026-02-14 06:32:54.118266 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-02-14 06:32:54.118273 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-02-14 06:32:54.118277 | orchestrator | 2026-02-14 06:32:54.118281 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-14 06:32:54.118285 | orchestrator | Saturday 14 February 2026 06:32:10 +0000 (0:00:04.702) 0:55:22.697 ***** 2026-02-14 06:32:54.118289 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:32:54.118292 | orchestrator | 2026-02-14 06:32:54.118296 | orchestrator | TASK [ceph-config : Create ceph 
conf directory] ******************************** 2026-02-14 06:32:54.118300 | orchestrator | Saturday 14 February 2026 06:32:11 +0000 (0:00:01.212) 0:55:23.910 ***** 2026-02-14 06:32:54.118304 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:32:54.118308 | orchestrator | 2026-02-14 06:32:54.118312 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-14 06:32:54.118315 | orchestrator | Saturday 14 February 2026 06:32:12 +0000 (0:00:01.193) 0:55:25.103 ***** 2026-02-14 06:32:54.118319 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:32:54.118323 | orchestrator | 2026-02-14 06:32:54.118327 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-14 06:32:54.118330 | orchestrator | Saturday 14 February 2026 06:32:13 +0000 (0:00:01.172) 0:55:26.276 ***** 2026-02-14 06:32:54.118334 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:32:54.118338 | orchestrator | 2026-02-14 06:32:54.118342 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-14 06:32:54.118346 | orchestrator | Saturday 14 February 2026 06:32:15 +0000 (0:00:01.134) 0:55:27.410 ***** 2026-02-14 06:32:54.118349 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:32:54.118353 | orchestrator | 2026-02-14 06:32:54.118357 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-14 06:32:54.118361 | orchestrator | Saturday 14 February 2026 06:32:16 +0000 (0:00:01.175) 0:55:28.586 ***** 2026-02-14 06:32:54.118365 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:32:54.118370 | orchestrator | 2026-02-14 06:32:54.118374 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-14 06:32:54.118377 | orchestrator | Saturday 14 February 2026 06:32:17 +0000 (0:00:01.291) 0:55:29.877 
***** 2026-02-14 06:32:54.118381 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-14 06:32:54.118386 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-14 06:32:54.118390 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-14 06:32:54.118393 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:32:54.118397 | orchestrator | 2026-02-14 06:32:54.118401 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-14 06:32:54.118405 | orchestrator | Saturday 14 February 2026 06:32:18 +0000 (0:00:01.414) 0:55:31.292 ***** 2026-02-14 06:32:54.118408 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-14 06:32:54.118412 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-14 06:32:54.118420 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-14 06:32:54.118423 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:32:54.118427 | orchestrator | 2026-02-14 06:32:54.118431 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-14 06:32:54.118435 | orchestrator | Saturday 14 February 2026 06:32:20 +0000 (0:00:01.464) 0:55:32.756 ***** 2026-02-14 06:32:54.118439 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-14 06:32:54.118442 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-14 06:32:54.118446 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-14 06:32:54.118459 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:32:54.118463 | orchestrator | 2026-02-14 06:32:54.118467 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-14 06:32:54.118471 | orchestrator | Saturday 14 February 2026 06:32:22 +0000 (0:00:01.802) 0:55:34.559 ***** 2026-02-14 06:32:54.118474 | orchestrator | ok: 
[testbed-node-4] 2026-02-14 06:32:54.118478 | orchestrator | 2026-02-14 06:32:54.118482 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-14 06:32:54.118486 | orchestrator | Saturday 14 February 2026 06:32:23 +0000 (0:00:01.164) 0:55:35.724 ***** 2026-02-14 06:32:54.118490 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-14 06:32:54.118493 | orchestrator | 2026-02-14 06:32:54.118500 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-14 06:32:54.118504 | orchestrator | Saturday 14 February 2026 06:32:25 +0000 (0:00:01.871) 0:55:37.595 ***** 2026-02-14 06:32:54.118508 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:32:54.118511 | orchestrator | 2026-02-14 06:32:54.118515 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-02-14 06:32:54.118519 | orchestrator | Saturday 14 February 2026 06:32:27 +0000 (0:00:01.755) 0:55:39.351 ***** 2026-02-14 06:32:54.118523 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:32:54.118526 | orchestrator | 2026-02-14 06:32:54.118530 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-02-14 06:32:54.118534 | orchestrator | Saturday 14 February 2026 06:32:28 +0000 (0:00:01.145) 0:55:40.497 ***** 2026-02-14 06:32:54.118538 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-4 2026-02-14 06:32:54.118541 | orchestrator | 2026-02-14 06:32:54.118545 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-02-14 06:32:54.118549 | orchestrator | Saturday 14 February 2026 06:32:29 +0000 (0:00:01.483) 0:55:41.980 ***** 2026-02-14 06:32:54.118552 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-14 06:32:54.118556 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 
2026-02-14 06:32:54.118560 | orchestrator | 2026-02-14 06:32:54.118564 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-02-14 06:32:54.118567 | orchestrator | Saturday 14 February 2026 06:32:31 +0000 (0:00:01.833) 0:55:43.813 ***** 2026-02-14 06:32:54.118571 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-14 06:32:54.118575 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-14 06:32:54.118579 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-14 06:32:54.118583 | orchestrator | 2026-02-14 06:32:54.118586 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-02-14 06:32:54.118590 | orchestrator | Saturday 14 February 2026 06:32:34 +0000 (0:00:03.230) 0:55:47.044 ***** 2026-02-14 06:32:54.118594 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-02-14 06:32:54.118598 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-14 06:32:54.118601 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:32:54.118605 | orchestrator | 2026-02-14 06:32:54.118609 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-02-14 06:32:54.118612 | orchestrator | Saturday 14 February 2026 06:32:36 +0000 (0:00:01.997) 0:55:49.041 ***** 2026-02-14 06:32:54.118619 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:32:54.118623 | orchestrator | 2026-02-14 06:32:54.118627 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-02-14 06:32:54.118630 | orchestrator | Saturday 14 February 2026 06:32:38 +0000 (0:00:01.519) 0:55:50.561 ***** 2026-02-14 06:32:54.118634 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:32:54.118638 | orchestrator | 2026-02-14 06:32:54.118642 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-02-14 
06:32:54.118645 | orchestrator | Saturday 14 February 2026 06:32:39 +0000 (0:00:01.203) 0:55:51.765 ***** 2026-02-14 06:32:54.118649 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-4 2026-02-14 06:32:54.118653 | orchestrator | 2026-02-14 06:32:54.118657 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-02-14 06:32:54.118661 | orchestrator | Saturday 14 February 2026 06:32:41 +0000 (0:00:01.645) 0:55:53.410 ***** 2026-02-14 06:32:54.118665 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-4 2026-02-14 06:32:54.118668 | orchestrator | 2026-02-14 06:32:54.118672 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-02-14 06:32:54.118676 | orchestrator | Saturday 14 February 2026 06:32:42 +0000 (0:00:01.528) 0:55:54.938 ***** 2026-02-14 06:32:54.118679 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:32:54.118683 | orchestrator | 2026-02-14 06:32:54.118687 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-02-14 06:32:54.118691 | orchestrator | Saturday 14 February 2026 06:32:44 +0000 (0:00:02.129) 0:55:57.068 ***** 2026-02-14 06:32:54.118695 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:32:54.118698 | orchestrator | 2026-02-14 06:32:54.118702 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-02-14 06:32:54.118706 | orchestrator | Saturday 14 February 2026 06:32:46 +0000 (0:00:01.954) 0:55:59.022 ***** 2026-02-14 06:32:54.118710 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:32:54.118713 | orchestrator | 2026-02-14 06:32:54.118717 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-02-14 06:32:54.118721 | orchestrator | Saturday 14 February 2026 06:32:49 +0000 (0:00:02.316) 0:56:01.338 ***** 2026-02-14 06:32:54.118725 | 
orchestrator | ok: [testbed-node-4] 2026-02-14 06:32:54.118728 | orchestrator | 2026-02-14 06:32:54.118732 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-02-14 06:32:54.118736 | orchestrator | Saturday 14 February 2026 06:32:51 +0000 (0:00:02.256) 0:56:03.595 ***** 2026-02-14 06:32:54.118740 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:32:54.118743 | orchestrator | 2026-02-14 06:32:54.118747 | orchestrator | TASK [Restart ceph mds] ******************************************************** 2026-02-14 06:32:54.118751 | orchestrator | Saturday 14 February 2026 06:32:52 +0000 (0:00:01.627) 0:56:05.222 ***** 2026-02-14 06:32:54.118757 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:33:33.034997 | orchestrator | 2026-02-14 06:33:33.035095 | orchestrator | TASK [Restart active mds] ****************************************************** 2026-02-14 06:33:33.035103 | orchestrator | Saturday 14 February 2026 06:32:54 +0000 (0:00:01.206) 0:56:06.429 ***** 2026-02-14 06:33:33.035107 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:33:33.035113 | orchestrator | 2026-02-14 06:33:33.035117 | orchestrator | PLAY [Upgrade standbys ceph mdss cluster] ************************************** 2026-02-14 06:33:33.035122 | orchestrator | 2026-02-14 06:33:33.035126 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-14 06:33:33.035130 | orchestrator | Saturday 14 February 2026 06:33:05 +0000 (0:00:11.382) 0:56:17.811 ***** 2026-02-14 06:33:33.035147 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5, testbed-node-3 2026-02-14 06:33:33.035152 | orchestrator | 2026-02-14 06:33:33.035156 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-14 06:33:33.035160 | orchestrator | Saturday 14 February 2026 06:33:07 +0000 (0:00:01.933) 0:56:19.744 ***** 2026-02-14 06:33:33.035211 | 
orchestrator | ok: [testbed-node-5] 2026-02-14 06:33:33.035216 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:33:33.035220 | orchestrator | 2026-02-14 06:33:33.035224 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-14 06:33:33.035228 | orchestrator | Saturday 14 February 2026 06:33:09 +0000 (0:00:01.924) 0:56:21.669 ***** 2026-02-14 06:33:33.035232 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:33:33.035236 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:33:33.035240 | orchestrator | 2026-02-14 06:33:33.035245 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-14 06:33:33.035249 | orchestrator | Saturday 14 February 2026 06:33:10 +0000 (0:00:01.613) 0:56:23.283 ***** 2026-02-14 06:33:33.035253 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:33:33.035257 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:33:33.035261 | orchestrator | 2026-02-14 06:33:33.035265 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-14 06:33:33.035269 | orchestrator | Saturday 14 February 2026 06:33:12 +0000 (0:00:01.960) 0:56:25.244 ***** 2026-02-14 06:33:33.035273 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:33:33.035277 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:33:33.035281 | orchestrator | 2026-02-14 06:33:33.035285 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-14 06:33:33.035290 | orchestrator | Saturday 14 February 2026 06:33:14 +0000 (0:00:01.632) 0:56:26.877 ***** 2026-02-14 06:33:33.035294 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:33:33.035298 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:33:33.035302 | orchestrator | 2026-02-14 06:33:33.035306 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-14 06:33:33.035311 | orchestrator | Saturday 14 February 
2026 06:33:16 +0000 (0:00:01.523) 0:56:28.401 ***** 2026-02-14 06:33:33.035315 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:33:33.035319 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:33:33.035323 | orchestrator | 2026-02-14 06:33:33.035327 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-14 06:33:33.035331 | orchestrator | Saturday 14 February 2026 06:33:17 +0000 (0:00:01.551) 0:56:29.953 ***** 2026-02-14 06:33:33.035336 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:33:33.035340 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:33:33.035344 | orchestrator | 2026-02-14 06:33:33.035349 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-14 06:33:33.035353 | orchestrator | Saturday 14 February 2026 06:33:19 +0000 (0:00:01.629) 0:56:31.582 ***** 2026-02-14 06:33:33.035357 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:33:33.035361 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:33:33.035365 | orchestrator | 2026-02-14 06:33:33.035369 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-14 06:33:33.035373 | orchestrator | Saturday 14 February 2026 06:33:20 +0000 (0:00:01.543) 0:56:33.126 ***** 2026-02-14 06:33:33.035377 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 06:33:33.035381 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:33:33.035385 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 06:33:33.035390 | orchestrator | 2026-02-14 06:33:33.035394 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-14 06:33:33.035398 | orchestrator | Saturday 14 February 2026 06:33:22 +0000 (0:00:01.704) 0:56:34.830 ***** 2026-02-14 06:33:33.035402 | 
orchestrator | ok: [testbed-node-5] 2026-02-14 06:33:33.035406 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:33:33.035410 | orchestrator | 2026-02-14 06:33:33.035414 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-14 06:33:33.035418 | orchestrator | Saturday 14 February 2026 06:33:23 +0000 (0:00:01.475) 0:56:36.305 ***** 2026-02-14 06:33:33.035422 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 06:33:33.035430 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:33:33.035435 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 06:33:33.035439 | orchestrator | 2026-02-14 06:33:33.035444 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-14 06:33:33.035448 | orchestrator | Saturday 14 February 2026 06:33:27 +0000 (0:00:03.242) 0:56:39.548 ***** 2026-02-14 06:33:33.035452 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-14 06:33:33.035457 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-14 06:33:33.035461 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-14 06:33:33.035465 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:33:33.035469 | orchestrator | 2026-02-14 06:33:33.035473 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-14 06:33:33.035478 | orchestrator | Saturday 14 February 2026 06:33:28 +0000 (0:00:01.414) 0:56:40.963 ***** 2026-02-14 06:33:33.035494 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-14 06:33:33.035501 | 
orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-14 06:33:33.035508 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-14 06:33:33.035512 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:33:33.035516 | orchestrator | 2026-02-14 06:33:33.035520 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-14 06:33:33.035525 | orchestrator | Saturday 14 February 2026 06:33:30 +0000 (0:00:01.979) 0:56:42.942 ***** 2026-02-14 06:33:33.035530 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 06:33:33.035536 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 06:33:33.035541 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 06:33:33.035545 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:33:33.035549 | orchestrator | 2026-02-14 06:33:33.035554 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-14 06:33:33.035558 | orchestrator | Saturday 14 February 2026 06:33:31 +0000 (0:00:01.168) 0:56:44.111 ***** 2026-02-14 06:33:33.035564 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'fcade5e8eca4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-14 06:33:24.550508', 'end': '2026-02-14 06:33:24.600142', 'delta': '0:00:00.049634', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fcade5e8eca4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-14 06:33:33.035575 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'b8937503c016', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-14 06:33:25.439185', 'end': '2026-02-14 06:33:25.484633', 'delta': '0:00:00.045448', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': 
None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b8937503c016'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-14 06:33:33.035588 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'bc1e9cbf1ddd', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-14 06:33:25.985933', 'end': '2026-02-14 06:33:26.028664', 'delta': '0:00:00.042731', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['bc1e9cbf1ddd'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-14 06:33:53.649135 | orchestrator | 2026-02-14 06:33:53.649287 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-14 06:33:53.649312 | orchestrator | Saturday 14 February 2026 06:33:33 +0000 (0:00:01.235) 0:56:45.347 ***** 2026-02-14 06:33:53.649324 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:33:53.649337 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:33:53.649348 | orchestrator | 2026-02-14 06:33:53.649359 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-14 06:33:53.649370 | orchestrator | Saturday 14 February 2026 06:33:34 +0000 (0:00:01.400) 0:56:46.747 ***** 2026-02-14 06:33:53.649381 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:33:53.649393 | orchestrator | 2026-02-14 06:33:53.649403 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-14 06:33:53.649414 | orchestrator | Saturday 
14 February 2026 06:33:35 +0000 (0:00:01.270) 0:56:48.018 ***** 2026-02-14 06:33:53.649425 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:33:53.649436 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:33:53.649446 | orchestrator | 2026-02-14 06:33:53.649457 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-14 06:33:53.649468 | orchestrator | Saturday 14 February 2026 06:33:36 +0000 (0:00:01.268) 0:56:49.287 ***** 2026-02-14 06:33:53.649479 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-14 06:33:53.649491 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-14 06:33:53.649501 | orchestrator | 2026-02-14 06:33:53.649512 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-14 06:33:53.649523 | orchestrator | Saturday 14 February 2026 06:33:40 +0000 (0:00:03.220) 0:56:52.507 ***** 2026-02-14 06:33:53.649534 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:33:53.649544 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:33:53.649555 | orchestrator | 2026-02-14 06:33:53.649567 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-14 06:33:53.649607 | orchestrator | Saturday 14 February 2026 06:33:41 +0000 (0:00:01.300) 0:56:53.807 ***** 2026-02-14 06:33:53.649620 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:33:53.649633 | orchestrator | 2026-02-14 06:33:53.649646 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-14 06:33:53.649658 | orchestrator | Saturday 14 February 2026 06:33:42 +0000 (0:00:01.153) 0:56:54.960 ***** 2026-02-14 06:33:53.649670 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:33:53.649684 | orchestrator | 2026-02-14 06:33:53.649696 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-14 
06:33:53.649709 | orchestrator | Saturday 14 February 2026 06:33:44 +0000 (0:00:01.578) 0:56:56.539 ***** 2026-02-14 06:33:53.649721 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:33:53.649733 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:33:53.649746 | orchestrator | 2026-02-14 06:33:53.649758 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-14 06:33:53.649770 | orchestrator | Saturday 14 February 2026 06:33:45 +0000 (0:00:01.374) 0:56:57.913 ***** 2026-02-14 06:33:53.649783 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:33:53.649796 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:33:53.649808 | orchestrator | 2026-02-14 06:33:53.649821 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-14 06:33:53.649833 | orchestrator | Saturday 14 February 2026 06:33:46 +0000 (0:00:01.247) 0:56:59.161 ***** 2026-02-14 06:33:53.649845 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:33:53.649857 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:33:53.649870 | orchestrator | 2026-02-14 06:33:53.649883 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-14 06:33:53.649896 | orchestrator | Saturday 14 February 2026 06:33:48 +0000 (0:00:01.285) 0:57:00.446 ***** 2026-02-14 06:33:53.649908 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:33:53.649920 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:33:53.649932 | orchestrator | 2026-02-14 06:33:53.649951 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-14 06:33:53.649970 | orchestrator | Saturday 14 February 2026 06:33:49 +0000 (0:00:01.251) 0:57:01.697 ***** 2026-02-14 06:33:53.649987 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:33:53.650006 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:33:53.650115 | orchestrator | 2026-02-14 
06:33:53.650129 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-14 06:33:53.650140 | orchestrator | Saturday 14 February 2026 06:33:50 +0000 (0:00:01.301) 0:57:02.999 ***** 2026-02-14 06:33:53.650151 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:33:53.650162 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:33:53.650173 | orchestrator | 2026-02-14 06:33:53.650184 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-14 06:33:53.650195 | orchestrator | Saturday 14 February 2026 06:33:51 +0000 (0:00:01.212) 0:57:04.212 ***** 2026-02-14 06:33:53.650206 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:33:53.650217 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:33:53.650228 | orchestrator | 2026-02-14 06:33:53.650238 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-14 06:33:53.650249 | orchestrator | Saturday 14 February 2026 06:33:53 +0000 (0:00:01.273) 0:57:05.485 ***** 2026-02-14 06:33:53.650263 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:33:53.650313 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f7da5590--35e5--5703--96c8--37fe127c27f7-osd--block--f7da5590--35e5--5703--96c8--37fe127c27f7', 'dm-uuid-LVM-MtrIT20WffpmoZtgfeTXRFdMHN6P3sAdBjy5doWEhe9rKv9L584cW3XE9oTwvrjF'], 'uuids': ['d1275021-b819-484f-a475-f1a37389bb5c'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': 
None, 'sas_device_handle': None, 'serial': '54e6ca54', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Bjy5do-WEhe-9rKv-9L58-4cW3-XE9o-TwvrjF']}})  2026-02-14 06:33:53.650340 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43152e32-b25a-4e6e-b6b7-c3272099ce67', 'scsi-SQEMU_QEMU_HARDDISK_43152e32-b25a-4e6e-b6b7-c3272099ce67'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '43152e32', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-14 06:33:53.650352 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-56EAYM-xHsu-7hCn-RY2l-0van-u71J-PPT3Ej', 'scsi-0QEMU_QEMU_HARDDISK_89ffb490-ef56-465e-9c2a-8772cc279d48', 'scsi-SQEMU_QEMU_HARDDISK_89ffb490-ef56-465e-9c2a-8772cc279d48'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '89ffb490', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--1745485d--ab31--507e--930d--8d3ce82a0691-osd--block--1745485d--ab31--507e--930d--8d3ce82a0691']}})  2026-02-14 06:33:53.650365 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:33:53.650377 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:33:53.650390 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-06-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-14 06:33:53.650402 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:33:53.650427 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ozNSSm-TzZ4-0xZx-DrUn-qvt7-q7MT-Hzgzhl', 'dm-uuid-CRYPT-LUKS2-f72393e18a524b3b834b9c577813242e-ozNSSm-TzZ4-0xZx-DrUn-qvt7-q7MT-Hzgzhl'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-14 06:33:53.934434 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:33:53.934524 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1745485d--ab31--507e--930d--8d3ce82a0691-osd--block--1745485d--ab31--507e--930d--8d3ce82a0691', 'dm-uuid-LVM-XF74CRGH0USDiTPtHNxBQbnIHrjKBwEGozNSSmTzZ40xZxDrUnqvt7q7MTHzgzhl'], 'uuids': ['f72393e1-8a52-4b3b-834b-9c577813242e'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '89ffb490', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['ozNSSm-TzZ4-0xZx-DrUn-qvt7-q7MT-Hzgzhl']}})  2026-02-14 06:33:53.934539 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-5s32D9-BYka-Bj8X-nglK-5PU8-KqP1-tEDCHR', 'scsi-0QEMU_QEMU_HARDDISK_54e6ca54-a1fe-4396-8891-5cf52f763d40', 'scsi-SQEMU_QEMU_HARDDISK_54e6ca54-a1fe-4396-8891-5cf52f763d40'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '54e6ca54', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--f7da5590--35e5--5703--96c8--37fe127c27f7-osd--block--f7da5590--35e5--5703--96c8--37fe127c27f7']}})  2026-02-14 06:33:53.934549 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:33:53.934596 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '69aee15b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part16', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part14', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part15', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part1', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-14 06:33:53.934628 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:33:53.934638 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:33:53.934648 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Bjy5do-WEhe-9rKv-9L58-4cW3-XE9o-TwvrjF', 'dm-uuid-CRYPT-LUKS2-d1275021b819484fa475f1a37389bb5c-Bjy5do-WEhe-9rKv-9L58-4cW3-XE9o-TwvrjF'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-14 06:33:53.934659 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:33:53.934669 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 
'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:33:53.934679 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--86d1df08--738c--52e0--accb--8c0a21213af6-osd--block--86d1df08--738c--52e0--accb--8c0a21213af6', 'dm-uuid-LVM-y8TFd42k7h3tskYaBmVU96eirAODLPPWLm3s7r1uHf3qd9eZ715af0u59pi4vRGe'], 'uuids': ['6378402a-7c1c-407a-be8c-200236570708'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2ec12fdb', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Lm3s7r-1uHf-3qd9-eZ71-5af0-u59p-i4vRGe']}})  2026-02-14 06:33:53.934689 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8657c064-423f-4604-b6db-e42322d0b025', 'scsi-SQEMU_QEMU_HARDDISK_8657c064-423f-4604-b6db-e42322d0b025'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8657c064', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-14 06:33:53.934717 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-D7g0SF-SeWa-7MSU-rwcF-cnTN-mPuF-kfA0YK', 'scsi-0QEMU_QEMU_HARDDISK_763dae4f-8aba-40cd-b4e7-eeabad093491', 'scsi-SQEMU_QEMU_HARDDISK_763dae4f-8aba-40cd-b4e7-eeabad093491'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '763dae4f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--d74a1ea4--c27e--5375--be56--9d9a8e069fa6-osd--block--d74a1ea4--c27e--5375--be56--9d9a8e069fa6']}})  2026-02-14 06:33:55.056730 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:33:55.056836 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:33:55.056854 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-10-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-14 06:33:55.056869 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:33:55.056881 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Oj0FlQ-0hDf-J2Ma-enWm-21pn-eMRY-3n5AFS', 'dm-uuid-CRYPT-LUKS2-254c5794787a438987c7d5772aa30a89-Oj0FlQ-0hDf-J2Ma-enWm-21pn-eMRY-3n5AFS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-14 06:33:55.056892 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:33:55.056944 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d74a1ea4--c27e--5375--be56--9d9a8e069fa6-osd--block--d74a1ea4--c27e--5375--be56--9d9a8e069fa6', 'dm-uuid-LVM-bsT5DZ8cw32sKmXOfJetQqGU0HxblzT0Oj0FlQ0hDfJ2MaenWm21pneMRY3n5AFS'], 'uuids': ['254c5794-787a-4389-87c7-d5772aa30a89'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '763dae4f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Oj0FlQ-0hDf-J2Ma-enWm-21pn-eMRY-3n5AFS']}})  2026-02-14 06:33:55.056979 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-oc2pXT-2pSW-cOnk-GYPm-BmdS-2yWK-CLqXT7', 'scsi-0QEMU_QEMU_HARDDISK_2ec12fdb-ec43-4dc2-9206-4086e60213b8', 'scsi-SQEMU_QEMU_HARDDISK_2ec12fdb-ec43-4dc2-9206-4086e60213b8'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2ec12fdb', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--86d1df08--738c--52e0--accb--8c0a21213af6-osd--block--86d1df08--738c--52e0--accb--8c0a21213af6']}})  2026-02-14 06:33:55.056993 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:33:55.057009 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '01a64ec0', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part16', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part14', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part15', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part1', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-14 06:33:55.057031 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:33:55.057097 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:33:55.057118 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Lm3s7r-1uHf-3qd9-eZ71-5af0-u59p-i4vRGe', 'dm-uuid-CRYPT-LUKS2-6378402a7c1c407abe8c200236570708-Lm3s7r-1uHf-3qd9-eZ71-5af0-u59p-i4vRGe'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-14 06:33:55.310402 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:33:55.310572 | orchestrator | 2026-02-14 06:33:55.310590 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-14 06:33:55.310602 | orchestrator | Saturday 14 February 2026 06:33:55 +0000 (0:00:01.888) 0:57:07.374 ***** 2026-02-14 06:33:55.310617 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:33:55.310633 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f7da5590--35e5--5703--96c8--37fe127c27f7-osd--block--f7da5590--35e5--5703--96c8--37fe127c27f7', 'dm-uuid-LVM-MtrIT20WffpmoZtgfeTXRFdMHN6P3sAdBjy5doWEhe9rKv9L584cW3XE9oTwvrjF'], 'uuids': ['d1275021-b819-484f-a475-f1a37389bb5c'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '54e6ca54', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Bjy5do-WEhe-9rKv-9L58-4cW3-XE9o-TwvrjF']}}, 'ansible_loop_var': 'item'})  2026-02-14 06:33:55.310646 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43152e32-b25a-4e6e-b6b7-c3272099ce67', 'scsi-SQEMU_QEMU_HARDDISK_43152e32-b25a-4e6e-b6b7-c3272099ce67'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '43152e32', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:33:55.310706 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-56EAYM-xHsu-7hCn-RY2l-0van-u71J-PPT3Ej', 'scsi-0QEMU_QEMU_HARDDISK_89ffb490-ef56-465e-9c2a-8772cc279d48', 'scsi-SQEMU_QEMU_HARDDISK_89ffb490-ef56-465e-9c2a-8772cc279d48'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '89ffb490', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--1745485d--ab31--507e--930d--8d3ce82a0691-osd--block--1745485d--ab31--507e--930d--8d3ce82a0691']}}, 'ansible_loop_var': 'item'})  2026-02-14 06:33:55.310745 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:33:55.310759 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:33:55.310770 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-06-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:33:55.310782 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:33:55.310803 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ozNSSm-TzZ4-0xZx-DrUn-qvt7-q7MT-Hzgzhl', 'dm-uuid-CRYPT-LUKS2-f72393e18a524b3b834b9c577813242e-ozNSSm-TzZ4-0xZx-DrUn-qvt7-q7MT-Hzgzhl'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:33:55.310819 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:33:55.310831 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:33:55.310851 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1745485d--ab31--507e--930d--8d3ce82a0691-osd--block--1745485d--ab31--507e--930d--8d3ce82a0691', 'dm-uuid-LVM-XF74CRGH0USDiTPtHNxBQbnIHrjKBwEGozNSSmTzZ40xZxDrUnqvt7q7MTHzgzhl'], 'uuids': ['f72393e1-8a52-4b3b-834b-9c577813242e'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '89ffb490', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['ozNSSm-TzZ4-0xZx-DrUn-qvt7-q7MT-Hzgzhl']}}, 'ansible_loop_var': 'item'})  2026-02-14 06:33:55.379810 | orchestrator | skipping: [testbed-node-3] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--86d1df08--738c--52e0--accb--8c0a21213af6-osd--block--86d1df08--738c--52e0--accb--8c0a21213af6', 'dm-uuid-LVM-y8TFd42k7h3tskYaBmVU96eirAODLPPWLm3s7r1uHf3qd9eZ715af0u59pi4vRGe'], 'uuids': ['6378402a-7c1c-407a-be8c-200236570708'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2ec12fdb', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Lm3s7r-1uHf-3qd9-eZ71-5af0-u59p-i4vRGe']}}, 'ansible_loop_var': 'item'})  2026-02-14 06:33:55.379907 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-5s32D9-BYka-Bj8X-nglK-5PU8-KqP1-tEDCHR', 'scsi-0QEMU_QEMU_HARDDISK_54e6ca54-a1fe-4396-8891-5cf52f763d40', 'scsi-SQEMU_QEMU_HARDDISK_54e6ca54-a1fe-4396-8891-5cf52f763d40'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '54e6ca54', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--f7da5590--35e5--5703--96c8--37fe127c27f7-osd--block--f7da5590--35e5--5703--96c8--37fe127c27f7']}}, 'ansible_loop_var': 'item'})  2026-02-14 06:33:55.379961 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8657c064-423f-4604-b6db-e42322d0b025', 'scsi-SQEMU_QEMU_HARDDISK_8657c064-423f-4604-b6db-e42322d0b025'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8657c064', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:33:55.379974 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:33:55.380001 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': 
['lvm-pv-uuid-D7g0SF-SeWa-7MSU-rwcF-cnTN-mPuF-kfA0YK', 'scsi-0QEMU_QEMU_HARDDISK_763dae4f-8aba-40cd-b4e7-eeabad093491', 'scsi-SQEMU_QEMU_HARDDISK_763dae4f-8aba-40cd-b4e7-eeabad093491'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '763dae4f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--d74a1ea4--c27e--5375--be56--9d9a8e069fa6-osd--block--d74a1ea4--c27e--5375--be56--9d9a8e069fa6']}}, 'ansible_loop_var': 'item'})  2026-02-14 06:33:55.380019 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '69aee15b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part16', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part14', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part14'], 'uuids': [], 'labels': [], 'masters': []}, 
'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part15', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part1', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:33:55.380038 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:33:55.380074 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:33:55.380094 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  
2026-02-14 06:33:55.484294 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:33:55.484401 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-10-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:33:55.484414 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Bjy5do-WEhe-9rKv-9L58-4cW3-XE9o-TwvrjF', 'dm-uuid-CRYPT-LUKS2-d1275021b819484fa475f1a37389bb5c-Bjy5do-WEhe-9rKv-9L58-4cW3-XE9o-TwvrjF'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:33:55.484435 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:33:55.484446 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:33:55.484457 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Oj0FlQ-0hDf-J2Ma-enWm-21pn-eMRY-3n5AFS', 'dm-uuid-CRYPT-LUKS2-254c5794787a438987c7d5772aa30a89-Oj0FlQ-0hDf-J2Ma-enWm-21pn-eMRY-3n5AFS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:33:55.484483 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:33:55.484493 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d74a1ea4--c27e--5375--be56--9d9a8e069fa6-osd--block--d74a1ea4--c27e--5375--be56--9d9a8e069fa6', 'dm-uuid-LVM-bsT5DZ8cw32sKmXOfJetQqGU0HxblzT0Oj0FlQ0hDfJ2MaenWm21pneMRY3n5AFS'], 'uuids': ['254c5794-787a-4389-87c7-d5772aa30a89'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '763dae4f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Oj0FlQ-0hDf-J2Ma-enWm-21pn-eMRY-3n5AFS']}}, 'ansible_loop_var': 'item'})  2026-02-14 06:33:55.484510 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-oc2pXT-2pSW-cOnk-GYPm-BmdS-2yWK-CLqXT7', 'scsi-0QEMU_QEMU_HARDDISK_2ec12fdb-ec43-4dc2-9206-4086e60213b8', 'scsi-SQEMU_QEMU_HARDDISK_2ec12fdb-ec43-4dc2-9206-4086e60213b8'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2ec12fdb', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--86d1df08--738c--52e0--accb--8c0a21213af6-osd--block--86d1df08--738c--52e0--accb--8c0a21213af6']}}, 'ansible_loop_var': 'item'})  2026-02-14 06:33:55.484526 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:33:55.484545 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '01a64ec0', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part16', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part14', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part15', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part1', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:34:25.372340 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:34:25.372441 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:34:25.372470 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Lm3s7r-1uHf-3qd9-eZ71-5af0-u59p-i4vRGe', 'dm-uuid-CRYPT-LUKS2-6378402a7c1c407abe8c200236570708-Lm3s7r-1uHf-3qd9-eZ71-5af0-u59p-i4vRGe'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:34:25.372480 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:34:25.372489 | orchestrator | 2026-02-14 06:34:25.372498 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-14 06:34:25.372507 | orchestrator | Saturday 14 February 2026 06:33:56 +0000 (0:00:01.520) 0:57:08.894 ***** 2026-02-14 06:34:25.372515 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:34:25.372523 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:34:25.372530 | orchestrator | 2026-02-14 06:34:25.372537 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-14 06:34:25.372544 | orchestrator | Saturday 14 February 2026 06:33:58 +0000 (0:00:01.650) 0:57:10.545 ***** 2026-02-14 06:34:25.372552 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:34:25.372559 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:34:25.372566 | orchestrator | 2026-02-14 06:34:25.372573 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-14 06:34:25.372580 | orchestrator | Saturday 14 February 2026 06:33:59 +0000 (0:00:01.203) 0:57:11.749 ***** 2026-02-14 06:34:25.372587 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:34:25.372594 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:34:25.372602 | orchestrator | 2026-02-14 06:34:25.372609 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-14 06:34:25.372616 | orchestrator | Saturday 14 February 2026 06:34:01 +0000 (0:00:01.625) 0:57:13.375 ***** 2026-02-14 06:34:25.372641 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:34:25.372649 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:34:25.372656 | orchestrator | 2026-02-14 06:34:25.372663 | orchestrator | TASK [ceph-facts : 
Read osd pool default crush rule] *************************** 2026-02-14 06:34:25.372670 | orchestrator | Saturday 14 February 2026 06:34:02 +0000 (0:00:01.260) 0:57:14.635 ***** 2026-02-14 06:34:25.372677 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:34:25.372684 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:34:25.372691 | orchestrator | 2026-02-14 06:34:25.372698 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-14 06:34:25.372705 | orchestrator | Saturday 14 February 2026 06:34:04 +0000 (0:00:01.823) 0:57:16.459 ***** 2026-02-14 06:34:25.372712 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:34:25.372720 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:34:25.372727 | orchestrator | 2026-02-14 06:34:25.372734 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-14 06:34:25.372741 | orchestrator | Saturday 14 February 2026 06:34:05 +0000 (0:00:01.315) 0:57:17.774 ***** 2026-02-14 06:34:25.372748 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-14 06:34:25.372755 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-14 06:34:25.372762 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-14 06:34:25.372769 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-14 06:34:25.372776 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-14 06:34:25.372783 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-14 06:34:25.372790 | orchestrator | 2026-02-14 06:34:25.372797 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-14 06:34:25.372804 | orchestrator | Saturday 14 February 2026 06:34:07 +0000 (0:00:01.876) 0:57:19.651 ***** 2026-02-14 06:34:25.372826 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-14 06:34:25.372834 | orchestrator 
| skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-14 06:34:25.372841 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-14 06:34:25.372848 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:34:25.372855 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-14 06:34:25.372863 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-14 06:34:25.372872 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-14 06:34:25.372880 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:34:25.372888 | orchestrator |
2026-02-14 06:34:25.372897 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-14 06:34:25.372905 | orchestrator | Saturday 14 February 2026 06:34:08 +0000 (0:00:01.319) 0:57:20.970 *****
2026-02-14 06:34:25.372913 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5, testbed-node-3
2026-02-14 06:34:25.372922 | orchestrator |
2026-02-14 06:34:25.372930 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-14 06:34:25.372940 | orchestrator | Saturday 14 February 2026 06:34:09 +0000 (0:00:01.337) 0:57:22.308 *****
2026-02-14 06:34:25.372948 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:34:25.372956 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:34:25.372965 | orchestrator |
2026-02-14 06:34:25.372973 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-14 06:34:25.372982 | orchestrator | Saturday 14 February 2026 06:34:11 +0000 (0:00:01.257) 0:57:23.566 *****
2026-02-14 06:34:25.372990 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:34:25.372999 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:34:25.373007 | orchestrator |
2026-02-14 06:34:25.373015 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-14 06:34:25.373023 | orchestrator | Saturday 14 February 2026 06:34:12 +0000 (0:00:01.687) 0:57:25.253 *****
2026-02-14 06:34:25.373037 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:34:25.373050 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:34:25.373058 | orchestrator |
2026-02-14 06:34:25.373066 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-14 06:34:25.373095 | orchestrator | Saturday 14 February 2026 06:34:14 +0000 (0:00:01.323) 0:57:26.576 *****
2026-02-14 06:34:25.373104 | orchestrator | ok: [testbed-node-5]
2026-02-14 06:34:25.373111 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:34:25.373118 | orchestrator |
2026-02-14 06:34:25.373125 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-14 06:34:25.373132 | orchestrator | Saturday 14 February 2026 06:34:15 +0000 (0:00:01.430) 0:57:28.006 *****
2026-02-14 06:34:25.373139 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-14 06:34:25.373146 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-14 06:34:25.373154 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-14 06:34:25.373161 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:34:25.373168 | orchestrator |
2026-02-14 06:34:25.373175 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-14 06:34:25.373182 | orchestrator | Saturday 14 February 2026 06:34:17 +0000 (0:00:01.355) 0:57:29.362 *****
2026-02-14 06:34:25.373189 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-14 06:34:25.373196 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-14 06:34:25.373204 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-14 06:34:25.373211 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:34:25.373218 | orchestrator |
2026-02-14 06:34:25.373225 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-14 06:34:25.373232 | orchestrator | Saturday 14 February 2026 06:34:18 +0000 (0:00:01.433) 0:57:30.795 *****
2026-02-14 06:34:25.373239 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-14 06:34:25.373246 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-14 06:34:25.373253 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-14 06:34:25.373260 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:34:25.373267 | orchestrator |
2026-02-14 06:34:25.373275 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-14 06:34:25.373282 | orchestrator | Saturday 14 February 2026 06:34:19 +0000 (0:00:01.399) 0:57:32.194 *****
2026-02-14 06:34:25.373289 | orchestrator | ok: [testbed-node-5]
2026-02-14 06:34:25.373296 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:34:25.373303 | orchestrator |
2026-02-14 06:34:25.373310 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-14 06:34:25.373318 | orchestrator | Saturday 14 February 2026 06:34:21 +0000 (0:00:01.333) 0:57:33.528 *****
2026-02-14 06:34:25.373325 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-14 06:34:25.373332 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-14 06:34:25.373339 | orchestrator |
2026-02-14 06:34:25.373346 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-14 06:34:25.373353 | orchestrator | Saturday 14 February 2026 06:34:23 +0000 (0:00:01.884) 0:57:35.413 *****
2026-02-14 06:34:25.373360 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-14 06:34:25.373367 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-14 06:34:25.373374 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-14 06:34:25.373381 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-14 06:34:25.373388 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-14 06:34:25.373395 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-02-14 06:34:25.373407 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-14 06:35:10.348708 | orchestrator |
2026-02-14 06:35:10.348834 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-14 06:35:10.348860 | orchestrator | Saturday 14 February 2026 06:34:25 +0000 (0:00:02.259) 0:57:37.672 *****
2026-02-14 06:35:10.348879 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-14 06:35:10.348899 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-14 06:35:10.348917 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-14 06:35:10.348936 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-14 06:35:10.348954 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-14 06:35:10.348973 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-02-14 06:35:10.348993 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-14 06:35:10.349012 | orchestrator |
2026-02-14 06:35:10.349031 | orchestrator | TASK [Prevent restarts from the packaging] *************************************
2026-02-14 06:35:10.349049 | orchestrator | Saturday 14 February 2026 06:34:28 +0000 (0:00:02.686) 0:57:40.359 *****
2026-02-14 06:35:10.349068 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:10.349089 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:10.349107 | orchestrator |
2026-02-14 06:35:10.349159 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-14 06:35:10.349180 | orchestrator | Saturday 14 February 2026 06:34:29 +0000 (0:00:01.330) 0:57:41.689 *****
2026-02-14 06:35:10.349199 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5, testbed-node-3
2026-02-14 06:35:10.349218 | orchestrator |
2026-02-14 06:35:10.349240 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-14 06:35:10.349279 | orchestrator | Saturday 14 February 2026 06:34:30 +0000 (0:00:01.241) 0:57:42.931 *****
2026-02-14 06:35:10.349300 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5, testbed-node-3
2026-02-14 06:35:10.349323 | orchestrator |
2026-02-14 06:35:10.349344 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-14 06:35:10.349364 | orchestrator | Saturday 14 February 2026 06:34:32 +0000 (0:00:01.436) 0:57:44.368 *****
2026-02-14 06:35:10.349384 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:10.349405 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:10.349424 | orchestrator |
2026-02-14 06:35:10.349444 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-14 06:35:10.349464 | orchestrator | Saturday 14 February 2026 06:34:33 +0000 (0:00:01.228) 0:57:45.597 *****
2026-02-14 06:35:10.349485 | orchestrator | ok: [testbed-node-5]
2026-02-14 06:35:10.349505 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:35:10.349524 | orchestrator |
2026-02-14 06:35:10.349540 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-14 06:35:10.349552 | orchestrator | Saturday 14 February 2026 06:34:34 +0000 (0:00:01.674) 0:57:47.271 *****
2026-02-14 06:35:10.349565 | orchestrator | ok: [testbed-node-5]
2026-02-14 06:35:10.349577 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:35:10.349589 | orchestrator |
2026-02-14 06:35:10.349600 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-14 06:35:10.349611 | orchestrator | Saturday 14 February 2026 06:34:36 +0000 (0:00:01.614) 0:57:48.886 *****
2026-02-14 06:35:10.349622 | orchestrator | ok: [testbed-node-5]
2026-02-14 06:35:10.349632 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:35:10.349643 | orchestrator |
2026-02-14 06:35:10.349654 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-14 06:35:10.349665 | orchestrator | Saturday 14 February 2026 06:34:38 +0000 (0:00:01.688) 0:57:50.574 *****
2026-02-14 06:35:10.349675 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:10.349711 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:10.349722 | orchestrator |
2026-02-14 06:35:10.349734 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-14 06:35:10.349745 | orchestrator | Saturday 14 February 2026 06:34:39 +0000 (0:00:01.256) 0:57:51.831 *****
2026-02-14 06:35:10.349755 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:10.349766 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:10.349776 | orchestrator |
2026-02-14 06:35:10.349787 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-14 06:35:10.349798 | orchestrator | Saturday 14 February 2026 06:34:40 +0000 (0:00:01.258) 0:57:53.089 *****
2026-02-14 06:35:10.349808 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:10.349819 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:10.349829 | orchestrator |
2026-02-14 06:35:10.349840 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-14 06:35:10.349850 | orchestrator | Saturday 14 February 2026 06:34:42 +0000 (0:00:01.317) 0:57:54.407 *****
2026-02-14 06:35:10.349861 | orchestrator | ok: [testbed-node-5]
2026-02-14 06:35:10.349872 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:35:10.349882 | orchestrator |
2026-02-14 06:35:10.349892 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-14 06:35:10.349903 | orchestrator | Saturday 14 February 2026 06:34:43 +0000 (0:00:01.837) 0:57:56.245 *****
2026-02-14 06:35:10.349913 | orchestrator | ok: [testbed-node-5]
2026-02-14 06:35:10.349924 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:35:10.349934 | orchestrator |
2026-02-14 06:35:10.349945 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-14 06:35:10.349955 | orchestrator | Saturday 14 February 2026 06:34:45 +0000 (0:00:01.767) 0:57:58.013 *****
2026-02-14 06:35:10.349967 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:10.349987 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:10.350006 | orchestrator |
2026-02-14 06:35:10.350112 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-14 06:35:10.350158 | orchestrator | Saturday 14 February 2026 06:34:46 +0000 (0:00:01.230) 0:57:59.243 *****
2026-02-14 06:35:10.350176 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:10.350221 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:10.350242 | orchestrator |
2026-02-14 06:35:10.350263 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-14 06:35:10.350275 | orchestrator | Saturday 14 February 2026 06:34:48 +0000 (0:00:01.274) 0:58:00.518 *****
2026-02-14 06:35:10.350286 | orchestrator | ok: [testbed-node-5]
2026-02-14 06:35:10.350296 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:35:10.350307 | orchestrator |
2026-02-14 06:35:10.350318 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-14 06:35:10.350328 | orchestrator | Saturday 14 February 2026 06:34:49 +0000 (0:00:01.293) 0:58:01.811 *****
2026-02-14 06:35:10.350339 | orchestrator | ok: [testbed-node-5]
2026-02-14 06:35:10.350350 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:35:10.350360 | orchestrator |
2026-02-14 06:35:10.350371 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-14 06:35:10.350381 | orchestrator | Saturday 14 February 2026 06:34:50 +0000 (0:00:01.292) 0:58:03.103 *****
2026-02-14 06:35:10.350392 | orchestrator | ok: [testbed-node-5]
2026-02-14 06:35:10.350403 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:35:10.350413 | orchestrator |
2026-02-14 06:35:10.350424 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-14 06:35:10.350434 | orchestrator | Saturday 14 February 2026 06:34:52 +0000 (0:00:01.576) 0:58:04.680 *****
2026-02-14 06:35:10.350445 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:10.350456 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:10.350466 | orchestrator |
2026-02-14 06:35:10.350477 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-14 06:35:10.350488 | orchestrator | Saturday 14 February 2026 06:34:53 +0000 (0:00:01.330) 0:58:06.010 *****
2026-02-14 06:35:10.350512 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:10.350523 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:10.350534 | orchestrator |
2026-02-14 06:35:10.350545 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-14 06:35:10.350555 | orchestrator | Saturday 14 February 2026 06:34:54 +0000 (0:00:01.262) 0:58:07.273 *****
2026-02-14 06:35:10.350566 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:10.350584 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:10.350596 | orchestrator |
2026-02-14 06:35:10.350606 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-14 06:35:10.350617 | orchestrator | Saturday 14 February 2026 06:34:56 +0000 (0:00:01.283) 0:58:08.557 *****
2026-02-14 06:35:10.350628 | orchestrator | ok: [testbed-node-5]
2026-02-14 06:35:10.350638 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:35:10.350649 | orchestrator |
2026-02-14 06:35:10.350659 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-14 06:35:10.350670 | orchestrator | Saturday 14 February 2026 06:34:57 +0000 (0:00:01.220) 0:58:09.777 *****
2026-02-14 06:35:10.350681 | orchestrator | ok: [testbed-node-5]
2026-02-14 06:35:10.350691 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:35:10.350702 | orchestrator |
2026-02-14 06:35:10.350713 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-14 06:35:10.350723 | orchestrator | Saturday 14 February 2026 06:34:58 +0000 (0:00:01.542) 0:58:11.319 *****
2026-02-14 06:35:10.350734 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:10.350745 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:10.350755 | orchestrator |
2026-02-14 06:35:10.350766 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-14 06:35:10.350776 | orchestrator | Saturday 14 February 2026 06:35:00 +0000 (0:00:01.264) 0:58:12.584 *****
2026-02-14 06:35:10.350787 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:10.350797 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:10.350808 | orchestrator |
2026-02-14 06:35:10.350819 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-14 06:35:10.350830 | orchestrator | Saturday 14 February 2026 06:35:01 +0000 (0:00:01.259) 0:58:13.843 *****
2026-02-14 06:35:10.350840 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:10.350851 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:10.350862 | orchestrator |
2026-02-14 06:35:10.350872 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-14 06:35:10.350883 | orchestrator | Saturday 14 February 2026 06:35:02 +0000 (0:00:01.297) 0:58:15.141 *****
2026-02-14 06:35:10.350894 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:10.350904 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:10.350915 | orchestrator |
2026-02-14 06:35:10.350926 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-14 06:35:10.350936 | orchestrator | Saturday 14 February 2026 06:35:04 +0000 (0:00:01.234) 0:58:16.376 *****
2026-02-14 06:35:10.350947 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:10.350958 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:10.350968 | orchestrator |
2026-02-14 06:35:10.350979 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-14 06:35:10.350990 | orchestrator | Saturday 14 February 2026 06:35:05 +0000 (0:00:01.431) 0:58:17.808 *****
2026-02-14 06:35:10.351000 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:10.351011 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:10.351021 | orchestrator |
2026-02-14 06:35:10.351032 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-14 06:35:10.351043 | orchestrator | Saturday 14 February 2026 06:35:06 +0000 (0:00:01.212) 0:58:19.020 *****
2026-02-14 06:35:10.351053 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:10.351064 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:10.351075 | orchestrator |
2026-02-14 06:35:10.351085 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-14 06:35:10.351102 | orchestrator | Saturday 14 February 2026 06:35:07 +0000 (0:00:01.179) 0:58:20.200 *****
2026-02-14 06:35:10.351113 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:10.351162 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:10.351173 | orchestrator |
2026-02-14 06:35:10.351184 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-14 06:35:10.351195 | orchestrator | Saturday 14 February 2026 06:35:09 +0000 (0:00:01.242) 0:58:21.442 *****
2026-02-14 06:35:10.351205 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:10.351216 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:10.351227 | orchestrator |
2026-02-14 06:35:10.351245 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-14 06:35:56.349142 | orchestrator | Saturday 14 February 2026 06:35:10 +0000 (0:00:01.217) 0:58:22.660 *****
2026-02-14 06:35:56.349313 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:56.349329 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:56.349338 | orchestrator |
2026-02-14 06:35:56.349348 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-14 06:35:56.349356 | orchestrator | Saturday 14 February 2026 06:35:11 +0000 (0:00:01.192) 0:58:23.853 *****
2026-02-14 06:35:56.349364 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:56.349373 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:56.349382 | orchestrator |
2026-02-14 06:35:56.349390 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-14 06:35:56.349399 | orchestrator | Saturday 14 February 2026 06:35:12 +0000 (0:00:01.269) 0:58:25.122 *****
2026-02-14 06:35:56.349407 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:56.349416 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:56.349424 | orchestrator |
2026-02-14 06:35:56.349433 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-14 06:35:56.349442 | orchestrator | Saturday 14 February 2026 06:35:14 +0000 (0:00:01.421) 0:58:26.544 *****
2026-02-14 06:35:56.349451 | orchestrator | ok: [testbed-node-5]
2026-02-14 06:35:56.349461 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:35:56.349470 | orchestrator |
2026-02-14 06:35:56.349478 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-14 06:35:56.349487 | orchestrator | Saturday 14 February 2026 06:35:16 +0000 (0:00:02.552) 0:58:29.096 *****
2026-02-14 06:35:56.349497 | orchestrator | ok: [testbed-node-5]
2026-02-14 06:35:56.349505 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:35:56.349514 | orchestrator |
2026-02-14 06:35:56.349522 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-14 06:35:56.349530 | orchestrator | Saturday 14 February 2026 06:35:19 +0000 (0:00:02.411) 0:58:31.508 *****
2026-02-14 06:35:56.349539 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5, testbed-node-3
2026-02-14 06:35:56.349548 | orchestrator |
2026-02-14 06:35:56.349573 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-14 06:35:56.349582 | orchestrator | Saturday 14 February 2026 06:35:20 +0000 (0:00:01.235) 0:58:32.744 *****
2026-02-14 06:35:56.349591 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:56.349600 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:56.349608 | orchestrator |
2026-02-14 06:35:56.349617 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-14 06:35:56.349625 | orchestrator | Saturday 14 February 2026 06:35:21 +0000 (0:00:01.211) 0:58:33.955 *****
2026-02-14 06:35:56.349633 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:56.349641 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:56.349650 | orchestrator |
2026-02-14 06:35:56.349658 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-14 06:35:56.349667 | orchestrator | Saturday 14 February 2026 06:35:22 +0000 (0:00:01.232) 0:58:35.187 *****
2026-02-14 06:35:56.349674 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-14 06:35:56.349683 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-14 06:35:56.349717 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-14 06:35:56.349726 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-14 06:35:56.349734 | orchestrator |
2026-02-14 06:35:56.349742 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-14 06:35:56.349751 | orchestrator | Saturday 14 February 2026 06:35:24 +0000 (0:00:01.963) 0:58:37.151 *****
2026-02-14 06:35:56.349759 | orchestrator | ok: [testbed-node-5]
2026-02-14 06:35:56.349768 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:35:56.349776 | orchestrator |
2026-02-14 06:35:56.349784 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-14 06:35:56.349793 | orchestrator | Saturday 14 February 2026 06:35:26 +0000 (0:00:01.579) 0:58:38.731 *****
2026-02-14 06:35:56.349802 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:56.349811 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:56.349818 | orchestrator |
2026-02-14 06:35:56.349827 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-14 06:35:56.349836 | orchestrator | Saturday 14 February 2026 06:35:27 +0000 (0:00:01.298) 0:58:40.029 *****
2026-02-14 06:35:56.349844 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:56.349853 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:56.349861 | orchestrator |
2026-02-14 06:35:56.349870 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-14 06:35:56.349878 | orchestrator | Saturday 14 February 2026 06:35:29 +0000 (0:00:01.413) 0:58:41.443 *****
2026-02-14 06:35:56.349886 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:56.349894 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:56.349903 | orchestrator |
2026-02-14 06:35:56.349911 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-14 06:35:56.349920 | orchestrator | Saturday 14 February 2026 06:35:30 +0000 (0:00:01.280) 0:58:42.723 *****
2026-02-14 06:35:56.349928 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5, testbed-node-3
2026-02-14 06:35:56.349936 | orchestrator |
2026-02-14 06:35:56.349944 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-14 06:35:56.349953 | orchestrator | Saturday 14 February 2026 06:35:31 +0000 (0:00:01.219) 0:58:43.942 *****
2026-02-14 06:35:56.349961 | orchestrator | ok: [testbed-node-5]
2026-02-14 06:35:56.349970 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:35:56.349978 | orchestrator |
2026-02-14 06:35:56.349987 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-14 06:35:56.349995 | orchestrator | Saturday 14 February 2026 06:35:33 +0000 (0:00:02.245) 0:58:46.188 *****
2026-02-14 06:35:56.350004 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-14 06:35:56.350088 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-14 06:35:56.350100 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-14 06:35:56.350110 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:56.350119 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-14 06:35:56.350128 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-14 06:35:56.350138 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-14 06:35:56.350145 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:56.350153 | orchestrator |
2026-02-14 06:35:56.350180 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-14 06:35:56.350188 | orchestrator | Saturday 14 February 2026 06:35:35 +0000 (0:00:01.351) 0:58:47.539 *****
2026-02-14 06:35:56.350196 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:56.350204 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:56.350211 | orchestrator |
2026-02-14 06:35:56.350231 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-14 06:35:56.350238 | orchestrator | Saturday 14 February 2026 06:35:36 +0000 (0:00:01.280) 0:58:48.820 *****
2026-02-14 06:35:56.350246 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:56.350254 | orchestrator |
2026-02-14 06:35:56.350262 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-14 06:35:56.350270 | orchestrator | Saturday 14 February 2026 06:35:37 +0000 (0:00:01.192) 0:58:50.012 *****
2026-02-14 06:35:56.350277 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:56.350285 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:56.350294 | orchestrator |
2026-02-14 06:35:56.350302 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-14 06:35:56.350310 | orchestrator | Saturday 14 February 2026 06:35:39 +0000 (0:00:01.341) 0:58:51.354 *****
2026-02-14 06:35:56.350317 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:56.350325 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:56.350333 | orchestrator |
2026-02-14 06:35:56.350349 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-14 06:35:56.350357 | orchestrator | Saturday 14 February 2026 06:35:40 +0000 (0:00:01.323) 0:58:52.678 *****
2026-02-14 06:35:56.350366 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:56.350375 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:56.350384 | orchestrator |
2026-02-14 06:35:56.350393 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-14 06:35:56.350401 | orchestrator | Saturday 14 February 2026 06:35:41 +0000 (0:00:01.280) 0:58:53.959 *****
2026-02-14 06:35:56.350409 | orchestrator | ok: [testbed-node-5]
2026-02-14 06:35:56.350418 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:35:56.350427 | orchestrator |
2026-02-14 06:35:56.350435 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-14 06:35:56.350444 | orchestrator | Saturday 14 February 2026 06:35:44 +0000 (0:00:02.838) 0:58:56.798 *****
2026-02-14 06:35:56.350452 | orchestrator | ok: [testbed-node-5]
2026-02-14 06:35:56.350461 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:35:56.350469 | orchestrator |
2026-02-14 06:35:56.350477 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-14 06:35:56.350485 | orchestrator | Saturday 14 February 2026 06:35:45 +0000 (0:00:01.336) 0:58:58.135 *****
2026-02-14 06:35:56.350493 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5, testbed-node-3
2026-02-14 06:35:56.350502 | orchestrator |
2026-02-14 06:35:56.350511 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-14 06:35:56.350520 | orchestrator | Saturday 14 February 2026 06:35:47 +0000 (0:00:01.223) 0:58:59.358 *****
2026-02-14 06:35:56.350529 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:56.350537 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:56.350545 | orchestrator |
2026-02-14 06:35:56.350553 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-14 06:35:56.350574 | orchestrator | Saturday 14 February 2026 06:35:48 +0000 (0:00:01.260) 0:59:00.619 *****
2026-02-14 06:35:56.350583 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:56.350601 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:56.350609 | orchestrator |
2026-02-14 06:35:56.350618 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-14 06:35:56.350625 | orchestrator | Saturday 14 February 2026 06:35:49 +0000 (0:00:01.211) 0:59:01.832 *****
2026-02-14 06:35:56.350634 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:56.350643 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:56.350652 | orchestrator |
2026-02-14 06:35:56.350660 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-14 06:35:56.350668 | orchestrator | Saturday 14 February 2026 06:35:50 +0000 (0:00:01.261) 0:59:03.094 *****
2026-02-14 06:35:56.350676 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:56.350684 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:56.350701 | orchestrator |
2026-02-14 06:35:56.350710 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-14 06:35:56.350719 | orchestrator | Saturday 14 February 2026 06:35:52 +0000 (0:00:01.652) 0:59:04.747 *****
2026-02-14 06:35:56.350727 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:56.350736 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:56.350744 | orchestrator |
2026-02-14 06:35:56.350752 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-14 06:35:56.350761 | orchestrator | Saturday 14 February 2026 06:35:53 +0000 (0:00:01.286) 0:59:06.034 *****
2026-02-14 06:35:56.350770 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:56.350778 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:56.350787 | orchestrator |
2026-02-14 06:35:56.350795 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-14 06:35:56.350803 | orchestrator | Saturday 14 February 2026 06:35:54 +0000 (0:00:01.275) 0:59:07.310 *****
2026-02-14 06:35:56.350811 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:35:56.350819 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:35:56.350827 | orchestrator |
2026-02-14 06:35:56.350846 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-14 06:36:37.715057 | orchestrator | Saturday 14 February 2026 06:35:56 +0000 (0:00:01.348) 0:59:08.658 *****
2026-02-14 06:36:37.715177 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:36:37.715280 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:36:37.715297 | orchestrator |
2026-02-14 06:36:37.715309 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-14 06:36:37.715321 | orchestrator | Saturday 14 February 2026 06:35:57 +0000 (0:00:01.277) 0:59:09.935 *****
2026-02-14 06:36:37.715332 | orchestrator | ok: [testbed-node-5]
2026-02-14 06:36:37.715344 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:36:37.715355 | orchestrator |
2026-02-14 06:36:37.715366 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-14 06:36:37.715377 | orchestrator | Saturday 14 February 2026 06:35:59 +0000 (0:00:01.568) 0:59:11.503 *****
2026-02-14 06:36:37.715388 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5, testbed-node-3
2026-02-14 06:36:37.715399 | orchestrator |
2026-02-14 06:36:37.715410 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-14 06:36:37.715421 | orchestrator | Saturday 14 February 2026 06:36:00 +0000 (0:00:01.256) 0:59:12.760 *****
2026-02-14 06:36:37.715432 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-02-14 06:36:37.715443 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph)
2026-02-14 06:36:37.715454 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-02-14 06:36:37.715464 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/)
2026-02-14 06:36:37.715475 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-02-14 06:36:37.715486 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-02-14 06:36:37.715496 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-02-14 06:36:37.715507 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-02-14 06:36:37.715517 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-02-14 06:36:37.715544 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-02-14 06:36:37.715555 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-02-14 06:36:37.715565 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-02-14 06:36:37.715576 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-02-14 06:36:37.715587 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-02-14 06:36:37.715598 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-02-14 06:36:37.715609 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-02-14 06:36:37.715619 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-14 06:36:37.715653 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-14 06:36:37.715664 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-14 06:36:37.715675 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-14 06:36:37.715686 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-14 06:36:37.715696 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-14 06:36:37.715707 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-14 06:36:37.715717 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-14 06:36:37.715728 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-14 06:36:37.715738 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-14 06:36:37.715749 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-14 06:36:37.715759 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-14 06:36:37.715770 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-02-14 06:36:37.715781 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph)
2026-02-14 06:36:37.715792 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-02-14 06:36:37.715802 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph)
2026-02-14 06:36:37.715813 | orchestrator |
2026-02-14 06:36:37.715824 | orchestrator | TASK
[ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-14 06:36:37.715834 | orchestrator | Saturday 14 February 2026 06:36:07 +0000 (0:00:06.655) 0:59:19.416 ***** 2026-02-14 06:36:37.715845 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5, testbed-node-3 2026-02-14 06:36:37.715856 | orchestrator | 2026-02-14 06:36:37.715867 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-14 06:36:37.715877 | orchestrator | Saturday 14 February 2026 06:36:08 +0000 (0:00:01.222) 0:59:20.639 ***** 2026-02-14 06:36:37.715889 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-14 06:36:37.715901 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-14 06:36:37.715912 | orchestrator | 2026-02-14 06:36:37.715923 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-14 06:36:37.715934 | orchestrator | Saturday 14 February 2026 06:36:09 +0000 (0:00:01.606) 0:59:22.246 ***** 2026-02-14 06:36:37.715944 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-14 06:36:37.715955 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-14 06:36:37.715966 | orchestrator | 2026-02-14 06:36:37.715977 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-14 06:36:37.716007 | orchestrator | Saturday 14 February 2026 06:36:12 +0000 (0:00:02.435) 0:59:24.681 ***** 2026-02-14 06:36:37.716020 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:36:37.716031 | orchestrator | 
skipping: [testbed-node-3] 2026-02-14 06:36:37.716041 | orchestrator | 2026-02-14 06:36:37.716052 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-14 06:36:37.716063 | orchestrator | Saturday 14 February 2026 06:36:13 +0000 (0:00:01.348) 0:59:26.029 ***** 2026-02-14 06:36:37.716073 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:36:37.716084 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:36:37.716095 | orchestrator | 2026-02-14 06:36:37.716106 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-14 06:36:37.716116 | orchestrator | Saturday 14 February 2026 06:36:14 +0000 (0:00:01.278) 0:59:27.307 ***** 2026-02-14 06:36:37.716127 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:36:37.716138 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:36:37.716156 | orchestrator | 2026-02-14 06:36:37.716167 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-14 06:36:37.716178 | orchestrator | Saturday 14 February 2026 06:36:16 +0000 (0:00:01.246) 0:59:28.554 ***** 2026-02-14 06:36:37.716189 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:36:37.716221 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:36:37.716232 | orchestrator | 2026-02-14 06:36:37.716260 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-14 06:36:37.716271 | orchestrator | Saturday 14 February 2026 06:36:17 +0000 (0:00:01.318) 0:59:29.872 ***** 2026-02-14 06:36:37.716282 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:36:37.716292 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:36:37.716406 | orchestrator | 2026-02-14 06:36:37.716418 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-14 06:36:37.716430 | orchestrator | Saturday 14 February 2026 
06:36:18 +0000 (0:00:01.303) 0:59:31.176 ***** 2026-02-14 06:36:37.716440 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:36:37.716451 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:36:37.716462 | orchestrator | 2026-02-14 06:36:37.716494 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-14 06:36:37.716506 | orchestrator | Saturday 14 February 2026 06:36:20 +0000 (0:00:01.264) 0:59:32.440 ***** 2026-02-14 06:36:37.716517 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:36:37.716527 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:36:37.716571 | orchestrator | 2026-02-14 06:36:37.716584 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-14 06:36:37.716595 | orchestrator | Saturday 14 February 2026 06:36:21 +0000 (0:00:01.658) 0:59:34.099 ***** 2026-02-14 06:36:37.716605 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:36:37.716616 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:36:37.716636 | orchestrator | 2026-02-14 06:36:37.716648 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-14 06:36:37.716658 | orchestrator | Saturday 14 February 2026 06:36:23 +0000 (0:00:01.261) 0:59:35.361 ***** 2026-02-14 06:36:37.716669 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:36:37.716680 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:36:37.716690 | orchestrator | 2026-02-14 06:36:37.716739 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-14 06:36:37.716753 | orchestrator | Saturday 14 February 2026 06:36:24 +0000 (0:00:01.239) 0:59:36.601 ***** 2026-02-14 06:36:37.716795 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:36:37.716903 | orchestrator | skipping: [testbed-node-3] 2026-02-14 
06:36:37.716915 | orchestrator | 2026-02-14 06:36:37.716925 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-14 06:36:37.716936 | orchestrator | Saturday 14 February 2026 06:36:25 +0000 (0:00:01.309) 0:59:37.911 ***** 2026-02-14 06:36:37.716947 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:36:37.716957 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:36:37.716968 | orchestrator | 2026-02-14 06:36:37.716993 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-14 06:36:37.717018 | orchestrator | Saturday 14 February 2026 06:36:26 +0000 (0:00:01.260) 0:59:39.171 ***** 2026-02-14 06:36:37.717030 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] 2026-02-14 06:36:37.717040 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-02-14 06:36:37.717051 | orchestrator | 2026-02-14 06:36:37.717062 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-14 06:36:37.717073 | orchestrator | Saturday 14 February 2026 06:36:31 +0000 (0:00:04.524) 0:59:43.695 ***** 2026-02-14 06:36:37.717083 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-14 06:36:37.717094 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-14 06:36:37.717115 | orchestrator | 2026-02-14 06:36:37.717126 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-14 06:36:37.717137 | orchestrator | Saturday 14 February 2026 06:36:32 +0000 (0:00:01.361) 0:59:45.057 ***** 2026-02-14 06:36:37.717150 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': 
'/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}]) 2026-02-14 06:36:37.717174 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-02-14 06:37:26.351932 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-02-14 06:37:26.352075 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 2026-02-14 06:37:26.352103 | orchestrator | 2026-02-14 06:37:26.352123 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-14 06:37:26.352144 | orchestrator | Saturday 14 February 2026 06:36:37 +0000 (0:00:04.969) 0:59:50.026 ***** 2026-02-14 06:37:26.352163 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:37:26.352183 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:37:26.352203 | orchestrator | 2026-02-14 06:37:26.352222 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-14 06:37:26.352300 | orchestrator | Saturday 14 February 2026 06:36:39 
+0000 (0:00:01.316) 0:59:51.343 ***** 2026-02-14 06:37:26.352319 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:37:26.352332 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:37:26.352343 | orchestrator | 2026-02-14 06:37:26.352371 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-14 06:37:26.352385 | orchestrator | Saturday 14 February 2026 06:36:40 +0000 (0:00:01.222) 0:59:52.566 ***** 2026-02-14 06:37:26.352396 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:37:26.352407 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:37:26.352418 | orchestrator | 2026-02-14 06:37:26.352429 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-14 06:37:26.352442 | orchestrator | Saturday 14 February 2026 06:36:41 +0000 (0:00:01.279) 0:59:53.845 ***** 2026-02-14 06:37:26.352454 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:37:26.352467 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:37:26.352479 | orchestrator | 2026-02-14 06:37:26.352492 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-14 06:37:26.352504 | orchestrator | Saturday 14 February 2026 06:36:42 +0000 (0:00:01.283) 0:59:55.129 ***** 2026-02-14 06:37:26.352517 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:37:26.352530 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:37:26.352545 | orchestrator | 2026-02-14 06:37:26.352563 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-14 06:37:26.352584 | orchestrator | Saturday 14 February 2026 06:36:44 +0000 (0:00:01.288) 0:59:56.418 ***** 2026-02-14 06:37:26.352644 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:37:26.352664 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:37:26.352680 | orchestrator | 2026-02-14 
06:37:26.352696 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-14 06:37:26.352713 | orchestrator | Saturday 14 February 2026 06:36:45 +0000 (0:00:01.713) 0:59:58.132 ***** 2026-02-14 06:37:26.352731 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-14 06:37:26.352748 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-14 06:37:26.352767 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-14 06:37:26.352785 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:37:26.352804 | orchestrator | 2026-02-14 06:37:26.352822 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-14 06:37:26.352840 | orchestrator | Saturday 14 February 2026 06:36:47 +0000 (0:00:01.443) 0:59:59.575 ***** 2026-02-14 06:37:26.352858 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-14 06:37:26.352876 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-14 06:37:26.352894 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-14 06:37:26.352912 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:37:26.352924 | orchestrator | 2026-02-14 06:37:26.352934 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-14 06:37:26.352945 | orchestrator | Saturday 14 February 2026 06:36:48 +0000 (0:00:01.461) 1:00:01.037 ***** 2026-02-14 06:37:26.352955 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-14 06:37:26.352966 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-14 06:37:26.352976 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-14 06:37:26.352987 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:37:26.352998 | orchestrator | 2026-02-14 06:37:26.353008 | orchestrator | TASK [ceph-facts : 
Reset rgw_instances (workaround)] *************************** 2026-02-14 06:37:26.353019 | orchestrator | Saturday 14 February 2026 06:36:50 +0000 (0:00:01.441) 1:00:02.478 ***** 2026-02-14 06:37:26.353029 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:37:26.353040 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:37:26.353051 | orchestrator | 2026-02-14 06:37:26.353062 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-14 06:37:26.353072 | orchestrator | Saturday 14 February 2026 06:36:51 +0000 (0:00:01.339) 1:00:03.817 ***** 2026-02-14 06:37:26.353083 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-14 06:37:26.353094 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-14 06:37:26.353104 | orchestrator | 2026-02-14 06:37:26.353115 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-14 06:37:26.353126 | orchestrator | Saturday 14 February 2026 06:36:53 +0000 (0:00:01.520) 1:00:05.338 ***** 2026-02-14 06:37:26.353137 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:37:26.353147 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:37:26.353158 | orchestrator | 2026-02-14 06:37:26.353190 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-02-14 06:37:26.353202 | orchestrator | Saturday 14 February 2026 06:36:55 +0000 (0:00:02.096) 1:00:07.435 ***** 2026-02-14 06:37:26.353213 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:37:26.353223 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:37:26.353234 | orchestrator | 2026-02-14 06:37:26.353274 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-02-14 06:37:26.353286 | orchestrator | Saturday 14 February 2026 06:36:56 +0000 (0:00:01.361) 1:00:08.796 ***** 2026-02-14 06:37:26.353296 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-5, 
testbed-node-3 2026-02-14 06:37:26.353309 | orchestrator | 2026-02-14 06:37:26.353319 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-02-14 06:37:26.353334 | orchestrator | Saturday 14 February 2026 06:36:57 +0000 (0:00:01.253) 1:00:10.050 ***** 2026-02-14 06:37:26.353352 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-14 06:37:26.353407 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-14 06:37:26.353430 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-02-14 06:37:26.353448 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-02-14 06:37:26.353466 | orchestrator | 2026-02-14 06:37:26.353483 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-02-14 06:37:26.353500 | orchestrator | Saturday 14 February 2026 06:36:59 +0000 (0:00:01.964) 1:00:12.015 ***** 2026-02-14 06:37:26.353519 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-14 06:37:26.353537 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-14 06:37:26.353565 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-14 06:37:26.353583 | orchestrator | 2026-02-14 06:37:26.353601 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-02-14 06:37:26.353618 | orchestrator | Saturday 14 February 2026 06:37:02 +0000 (0:00:03.106) 1:00:15.121 ***** 2026-02-14 06:37:26.353637 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-02-14 06:37:26.353656 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-14 06:37:26.353675 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:37:26.353694 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-02-14 06:37:26.353708 | orchestrator | skipping: [testbed-node-3] => 
(item=None)  2026-02-14 06:37:26.353719 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:37:26.353730 | orchestrator | 2026-02-14 06:37:26.353741 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-02-14 06:37:26.353751 | orchestrator | Saturday 14 February 2026 06:37:04 +0000 (0:00:02.098) 1:00:17.220 ***** 2026-02-14 06:37:26.353762 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:37:26.353773 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:37:26.353784 | orchestrator | 2026-02-14 06:37:26.353794 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-02-14 06:37:26.353805 | orchestrator | Saturday 14 February 2026 06:37:06 +0000 (0:00:02.033) 1:00:19.254 ***** 2026-02-14 06:37:26.353816 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:37:26.353826 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:37:26.353837 | orchestrator | 2026-02-14 06:37:26.353848 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-02-14 06:37:26.353858 | orchestrator | Saturday 14 February 2026 06:37:08 +0000 (0:00:01.265) 1:00:20.519 ***** 2026-02-14 06:37:26.353869 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-5, testbed-node-3 2026-02-14 06:37:26.353880 | orchestrator | 2026-02-14 06:37:26.353891 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-02-14 06:37:26.353902 | orchestrator | Saturday 14 February 2026 06:37:09 +0000 (0:00:01.231) 1:00:21.751 ***** 2026-02-14 06:37:26.353912 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-5, testbed-node-3 2026-02-14 06:37:26.353923 | orchestrator | 2026-02-14 06:37:26.353933 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-02-14 06:37:26.353957 | orchestrator | Saturday 14 February 
2026 06:37:10 +0000 (0:00:01.304) 1:00:23.056 ***** 2026-02-14 06:37:26.353968 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:37:26.353979 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:37:26.353990 | orchestrator | 2026-02-14 06:37:26.354000 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-02-14 06:37:26.354012 | orchestrator | Saturday 14 February 2026 06:37:12 +0000 (0:00:02.102) 1:00:25.158 ***** 2026-02-14 06:37:26.354198 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:37:26.354219 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:37:26.354260 | orchestrator | 2026-02-14 06:37:26.354272 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-02-14 06:37:26.354283 | orchestrator | Saturday 14 February 2026 06:37:15 +0000 (0:00:02.404) 1:00:27.563 ***** 2026-02-14 06:37:26.354306 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:37:26.354317 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:37:26.354328 | orchestrator | 2026-02-14 06:37:26.354338 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-02-14 06:37:26.354349 | orchestrator | Saturday 14 February 2026 06:37:17 +0000 (0:00:02.324) 1:00:29.888 ***** 2026-02-14 06:37:26.354360 | orchestrator | changed: [testbed-node-5] 2026-02-14 06:37:26.354370 | orchestrator | changed: [testbed-node-3] 2026-02-14 06:37:26.354381 | orchestrator | 2026-02-14 06:37:26.354392 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-02-14 06:37:26.354402 | orchestrator | Saturday 14 February 2026 06:37:21 +0000 (0:00:03.606) 1:00:33.495 ***** 2026-02-14 06:37:26.354413 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:37:26.354424 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:37:26.354434 | orchestrator | 2026-02-14 06:37:26.354445 | orchestrator | TASK [Set max_mds] 
************************************************************* 2026-02-14 06:37:26.354456 | orchestrator | Saturday 14 February 2026 06:37:22 +0000 (0:00:01.733) 1:00:35.228 ***** 2026-02-14 06:37:26.354466 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:37:26.354491 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-14 06:37:50.558847 | orchestrator | 2026-02-14 06:37:50.558962 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-02-14 06:37:50.558977 | orchestrator | 2026-02-14 06:37:50.558988 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-14 06:37:50.558999 | orchestrator | Saturday 14 February 2026 06:37:26 +0000 (0:00:03.429) 1:00:38.658 ***** 2026-02-14 06:37:50.559009 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3 2026-02-14 06:37:50.559018 | orchestrator | 2026-02-14 06:37:50.559028 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-14 06:37:50.559037 | orchestrator | Saturday 14 February 2026 06:37:27 +0000 (0:00:01.467) 1:00:40.125 ***** 2026-02-14 06:37:50.559047 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:37:50.559058 | orchestrator | 2026-02-14 06:37:50.559068 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-14 06:37:50.559078 | orchestrator | Saturday 14 February 2026 06:37:29 +0000 (0:00:01.452) 1:00:41.578 ***** 2026-02-14 06:37:50.559087 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:37:50.559098 | orchestrator | 2026-02-14 06:37:50.559115 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-14 06:37:50.559131 | orchestrator | Saturday 14 February 2026 06:37:30 +0000 (0:00:01.214) 1:00:42.793 ***** 2026-02-14 06:37:50.559183 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:37:50.559202 | 
orchestrator | 2026-02-14 06:37:50.559213 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-14 06:37:50.559222 | orchestrator | Saturday 14 February 2026 06:37:31 +0000 (0:00:01.440) 1:00:44.234 ***** 2026-02-14 06:37:50.559232 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:37:50.559241 | orchestrator | 2026-02-14 06:37:50.559251 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-14 06:37:50.559319 | orchestrator | Saturday 14 February 2026 06:37:33 +0000 (0:00:01.121) 1:00:45.355 ***** 2026-02-14 06:37:50.559331 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:37:50.559341 | orchestrator | 2026-02-14 06:37:50.559351 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-14 06:37:50.559361 | orchestrator | Saturday 14 February 2026 06:37:34 +0000 (0:00:01.209) 1:00:46.565 ***** 2026-02-14 06:37:50.559373 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:37:50.559384 | orchestrator | 2026-02-14 06:37:50.559395 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-14 06:37:50.559406 | orchestrator | Saturday 14 February 2026 06:37:35 +0000 (0:00:01.212) 1:00:47.777 ***** 2026-02-14 06:37:50.559418 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:37:50.559429 | orchestrator | 2026-02-14 06:37:50.559441 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-14 06:37:50.559477 | orchestrator | Saturday 14 February 2026 06:37:36 +0000 (0:00:01.192) 1:00:48.969 ***** 2026-02-14 06:37:50.559495 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:37:50.559512 | orchestrator | 2026-02-14 06:37:50.559528 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-14 06:37:50.559545 | orchestrator | Saturday 14 February 2026 06:37:37 +0000 
(0:00:01.177) 1:00:50.147 ***** 2026-02-14 06:37:50.559563 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 06:37:50.559581 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:37:50.559598 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 06:37:50.559611 | orchestrator | 2026-02-14 06:37:50.559622 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-14 06:37:50.559632 | orchestrator | Saturday 14 February 2026 06:37:39 +0000 (0:00:02.079) 1:00:52.227 ***** 2026-02-14 06:37:50.559643 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:37:50.559654 | orchestrator | 2026-02-14 06:37:50.559665 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-14 06:37:50.559675 | orchestrator | Saturday 14 February 2026 06:37:41 +0000 (0:00:01.265) 1:00:53.493 ***** 2026-02-14 06:37:50.559686 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 06:37:50.559697 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:37:50.559708 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 06:37:50.559720 | orchestrator | 2026-02-14 06:37:50.559730 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-14 06:37:50.559739 | orchestrator | Saturday 14 February 2026 06:37:44 +0000 (0:00:03.412) 1:00:56.906 ***** 2026-02-14 06:37:50.559749 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-14 06:37:50.559759 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-14 06:37:50.559769 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-14 
06:37:50.559778 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:37:50.559788 | orchestrator | 2026-02-14 06:37:50.559797 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-14 06:37:50.559807 | orchestrator | Saturday 14 February 2026 06:37:46 +0000 (0:00:01.879) 1:00:58.785 ***** 2026-02-14 06:37:50.559818 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-14 06:37:50.559832 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-14 06:37:50.559872 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-14 06:37:50.559891 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:37:50.559908 | orchestrator | 2026-02-14 06:37:50.559926 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-14 06:37:50.559942 | orchestrator | Saturday 14 February 2026 06:37:48 +0000 (0:00:01.668) 1:01:00.454 ***** 2026-02-14 06:37:50.559961 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 
06:37:50.559985 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 06:37:50.560002 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 06:37:50.560012 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:37:50.560022 | orchestrator | 2026-02-14 06:37:50.560031 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-14 06:37:50.560041 | orchestrator | Saturday 14 February 2026 06:37:49 +0000 (0:00:01.183) 1:01:01.637 ***** 2026-02-14 06:37:50.560053 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'fcade5e8eca4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-14 06:37:42.075796', 'end': '2026-02-14 06:37:42.128713', 'delta': '0:00:00.052917', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fcade5e8eca4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-14 06:37:50.560065 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'b8937503c016', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-14 06:37:42.683650', 'end': '2026-02-14 06:37:42.733455', 'delta': '0:00:00.049805', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b8937503c016'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-14 06:37:50.560076 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'bc1e9cbf1ddd', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-14 06:37:43.305514', 'end': '2026-02-14 06:37:43.353868', 'delta': '0:00:00.048354', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['bc1e9cbf1ddd'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-14 06:37:50.560086 | orchestrator | 2026-02-14 06:37:50.560104 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-14 06:38:08.407351 | orchestrator | Saturday 14 February 2026 06:37:50 +0000 (0:00:01.235) 1:01:02.873 ***** 2026-02-14 06:38:08.407468 | orchestrator | ok: [testbed-node-3] 2026-02-14 
06:38:08.407486 | orchestrator | 2026-02-14 06:38:08.407499 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-14 06:38:08.407536 | orchestrator | Saturday 14 February 2026 06:37:51 +0000 (0:00:01.257) 1:01:04.130 ***** 2026-02-14 06:38:08.407548 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:38:08.407560 | orchestrator | 2026-02-14 06:38:08.407571 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-14 06:38:08.407582 | orchestrator | Saturday 14 February 2026 06:37:53 +0000 (0:00:01.241) 1:01:05.371 ***** 2026-02-14 06:38:08.407593 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:38:08.407604 | orchestrator | 2026-02-14 06:38:08.407614 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-14 06:38:08.407625 | orchestrator | Saturday 14 February 2026 06:37:54 +0000 (0:00:01.138) 1:01:06.509 ***** 2026-02-14 06:38:08.407636 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-14 06:38:08.407647 | orchestrator | 2026-02-14 06:38:08.407658 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-14 06:38:08.407669 | orchestrator | Saturday 14 February 2026 06:37:56 +0000 (0:00:01.988) 1:01:08.498 ***** 2026-02-14 06:38:08.407679 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:38:08.407690 | orchestrator | 2026-02-14 06:38:08.407701 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-14 06:38:08.407712 | orchestrator | Saturday 14 February 2026 06:37:57 +0000 (0:00:01.136) 1:01:09.635 ***** 2026-02-14 06:38:08.407723 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:38:08.407733 | orchestrator | 2026-02-14 06:38:08.407759 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-14 06:38:08.407771 | orchestrator 
| Saturday 14 February 2026 06:37:58 +0000 (0:00:01.177) 1:01:10.813 ***** 2026-02-14 06:38:08.407781 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:38:08.407792 | orchestrator | 2026-02-14 06:38:08.407803 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-14 06:38:08.407814 | orchestrator | Saturday 14 February 2026 06:37:59 +0000 (0:00:01.303) 1:01:12.117 ***** 2026-02-14 06:38:08.407824 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:38:08.407835 | orchestrator | 2026-02-14 06:38:08.407851 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-14 06:38:08.407871 | orchestrator | Saturday 14 February 2026 06:38:00 +0000 (0:00:01.138) 1:01:13.256 ***** 2026-02-14 06:38:08.407890 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:38:08.407909 | orchestrator | 2026-02-14 06:38:08.407946 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-14 06:38:08.407967 | orchestrator | Saturday 14 February 2026 06:38:02 +0000 (0:00:01.170) 1:01:14.426 ***** 2026-02-14 06:38:08.407999 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:38:08.408018 | orchestrator | 2026-02-14 06:38:08.408030 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-14 06:38:08.408041 | orchestrator | Saturday 14 February 2026 06:38:03 +0000 (0:00:01.273) 1:01:15.700 ***** 2026-02-14 06:38:08.408051 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:38:08.408062 | orchestrator | 2026-02-14 06:38:08.408073 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-14 06:38:08.408084 | orchestrator | Saturday 14 February 2026 06:38:04 +0000 (0:00:01.119) 1:01:16.820 ***** 2026-02-14 06:38:08.408094 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:38:08.408105 | orchestrator | 2026-02-14 06:38:08.408116 | 
orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-14 06:38:08.408127 | orchestrator | Saturday 14 February 2026 06:38:05 +0000 (0:00:01.237) 1:01:18.058 ***** 2026-02-14 06:38:08.408137 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:38:08.408148 | orchestrator | 2026-02-14 06:38:08.408158 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-14 06:38:08.408170 | orchestrator | Saturday 14 February 2026 06:38:06 +0000 (0:00:01.214) 1:01:19.273 ***** 2026-02-14 06:38:08.408181 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:38:08.408191 | orchestrator | 2026-02-14 06:38:08.408202 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-14 06:38:08.408225 | orchestrator | Saturday 14 February 2026 06:38:08 +0000 (0:00:01.175) 1:01:20.449 ***** 2026-02-14 06:38:08.408238 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:38:08.408255 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--86d1df08--738c--52e0--accb--8c0a21213af6-osd--block--86d1df08--738c--52e0--accb--8c0a21213af6', 'dm-uuid-LVM-y8TFd42k7h3tskYaBmVU96eirAODLPPWLm3s7r1uHf3qd9eZ715af0u59pi4vRGe'], 'uuids': ['6378402a-7c1c-407a-be8c-200236570708'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2ec12fdb', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Lm3s7r-1uHf-3qd9-eZ71-5af0-u59p-i4vRGe']}})  2026-02-14 06:38:08.408325 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8657c064-423f-4604-b6db-e42322d0b025', 'scsi-SQEMU_QEMU_HARDDISK_8657c064-423f-4604-b6db-e42322d0b025'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8657c064', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-14 06:38:08.408347 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-D7g0SF-SeWa-7MSU-rwcF-cnTN-mPuF-kfA0YK', 'scsi-0QEMU_QEMU_HARDDISK_763dae4f-8aba-40cd-b4e7-eeabad093491', 'scsi-SQEMU_QEMU_HARDDISK_763dae4f-8aba-40cd-b4e7-eeabad093491'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '763dae4f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--d74a1ea4--c27e--5375--be56--9d9a8e069fa6-osd--block--d74a1ea4--c27e--5375--be56--9d9a8e069fa6']}})  2026-02-14 06:38:08.408360 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:38:08.408372 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:38:08.408384 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-10-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-14 06:38:08.408408 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:38:08.408435 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Oj0FlQ-0hDf-J2Ma-enWm-21pn-eMRY-3n5AFS', 'dm-uuid-CRYPT-LUKS2-254c5794787a438987c7d5772aa30a89-Oj0FlQ-0hDf-J2Ma-enWm-21pn-eMRY-3n5AFS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-14 06:38:08.408472 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:38:09.818788 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d74a1ea4--c27e--5375--be56--9d9a8e069fa6-osd--block--d74a1ea4--c27e--5375--be56--9d9a8e069fa6', 'dm-uuid-LVM-bsT5DZ8cw32sKmXOfJetQqGU0HxblzT0Oj0FlQ0hDfJ2MaenWm21pneMRY3n5AFS'], 'uuids': ['254c5794-787a-4389-87c7-d5772aa30a89'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '763dae4f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Oj0FlQ-0hDf-J2Ma-enWm-21pn-eMRY-3n5AFS']}})  2026-02-14 06:38:09.818934 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-oc2pXT-2pSW-cOnk-GYPm-BmdS-2yWK-CLqXT7', 'scsi-0QEMU_QEMU_HARDDISK_2ec12fdb-ec43-4dc2-9206-4086e60213b8', 'scsi-SQEMU_QEMU_HARDDISK_2ec12fdb-ec43-4dc2-9206-4086e60213b8'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2ec12fdb', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--86d1df08--738c--52e0--accb--8c0a21213af6-osd--block--86d1df08--738c--52e0--accb--8c0a21213af6']}})  2026-02-14 06:38:09.818963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:38:09.818992 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '01a64ec0', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part16', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part14', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part15', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part1', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-14 06:38:09.819069 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:38:09.819089 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:38:09.819116 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Lm3s7r-1uHf-3qd9-eZ71-5af0-u59p-i4vRGe', 'dm-uuid-CRYPT-LUKS2-6378402a7c1c407abe8c200236570708-Lm3s7r-1uHf-3qd9-eZ71-5af0-u59p-i4vRGe'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-14 06:38:09.819133 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:38:09.819151 | orchestrator | 2026-02-14 06:38:09.819168 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-14 06:38:09.819185 | orchestrator | Saturday 14 February 2026 06:38:09 +0000 (0:00:01.467) 1:01:21.916 ***** 2026-02-14 06:38:09.819205 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:38:09.819236 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--86d1df08--738c--52e0--accb--8c0a21213af6-osd--block--86d1df08--738c--52e0--accb--8c0a21213af6', 'dm-uuid-LVM-y8TFd42k7h3tskYaBmVU96eirAODLPPWLm3s7r1uHf3qd9eZ715af0u59pi4vRGe'], 'uuids': ['6378402a-7c1c-407a-be8c-200236570708'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2ec12fdb', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Lm3s7r-1uHf-3qd9-eZ71-5af0-u59p-i4vRGe']}}, 'ansible_loop_var': 'item'})  2026-02-14 06:38:09.819254 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8657c064-423f-4604-b6db-e42322d0b025', 'scsi-SQEMU_QEMU_HARDDISK_8657c064-423f-4604-b6db-e42322d0b025'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8657c064', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:38:09.819332 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-D7g0SF-SeWa-7MSU-rwcF-cnTN-mPuF-kfA0YK', 'scsi-0QEMU_QEMU_HARDDISK_763dae4f-8aba-40cd-b4e7-eeabad093491', 'scsi-SQEMU_QEMU_HARDDISK_763dae4f-8aba-40cd-b4e7-eeabad093491'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '763dae4f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--d74a1ea4--c27e--5375--be56--9d9a8e069fa6-osd--block--d74a1ea4--c27e--5375--be56--9d9a8e069fa6']}}, 'ansible_loop_var': 'item'})  2026-02-14 06:38:11.019907 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:38:11.019998 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:38:11.020031 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-10-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:38:11.020042 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:38:11.020052 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Oj0FlQ-0hDf-J2Ma-enWm-21pn-eMRY-3n5AFS', 'dm-uuid-CRYPT-LUKS2-254c5794787a438987c7d5772aa30a89-Oj0FlQ-0hDf-J2Ma-enWm-21pn-eMRY-3n5AFS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:38:11.020061 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-14 06:38:11.020095 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d74a1ea4--c27e--5375--be56--9d9a8e069fa6-osd--block--d74a1ea4--c27e--5375--be56--9d9a8e069fa6', 'dm-uuid-LVM-bsT5DZ8cw32sKmXOfJetQqGU0HxblzT0Oj0FlQ0hDfJ2MaenWm21pneMRY3n5AFS'], 'uuids': ['254c5794-787a-4389-87c7-d5772aa30a89'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '763dae4f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Oj0FlQ-0hDf-J2Ma-enWm-21pn-eMRY-3n5AFS']}}, 'ansible_loop_var': 'item'})
2026-02-14 06:38:11.020106 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-oc2pXT-2pSW-cOnk-GYPm-BmdS-2yWK-CLqXT7', 'scsi-0QEMU_QEMU_HARDDISK_2ec12fdb-ec43-4dc2-9206-4086e60213b8', 'scsi-SQEMU_QEMU_HARDDISK_2ec12fdb-ec43-4dc2-9206-4086e60213b8'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2ec12fdb', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--86d1df08--738c--52e0--accb--8c0a21213af6-osd--block--86d1df08--738c--52e0--accb--8c0a21213af6']}}, 'ansible_loop_var': 'item'})
2026-02-14 06:38:11.020125 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-14 06:38:11.020147 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '01a64ec0', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part16', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part14', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part15', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part1', 'scsi-SQEMU_QEMU_HARDDISK_01a64ec0-40ea-433b-abd1-e8b343921bd2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-14 06:38:40.247965 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-14 06:38:40.248148 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-14 06:38:40.248181 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Lm3s7r-1uHf-3qd9-eZ71-5af0-u59p-i4vRGe', 'dm-uuid-CRYPT-LUKS2-6378402a7c1c407abe8c200236570708-Lm3s7r-1uHf-3qd9-eZ71-5af0-u59p-i4vRGe'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-14 06:38:40.248194 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:38:40.248207 | orchestrator |
2026-02-14 06:38:40.248218 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-14 06:38:40.248228 | orchestrator | Saturday 14 February 2026 06:38:11 +0000 (0:00:01.419) 1:01:23.336 *****
2026-02-14 06:38:40.248238 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:38:40.248249 | orchestrator |
2026-02-14 06:38:40.248259 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-14 06:38:40.248269 | orchestrator | Saturday 14 February 2026 06:38:12 +0000 (0:00:01.530) 1:01:24.867 *****
2026-02-14 06:38:40.248278 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:38:40.248287 | orchestrator |
2026-02-14 06:38:40.248297 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-14 06:38:40.248339 | orchestrator | Saturday 14 February 2026 06:38:13 +0000 (0:00:01.166) 1:01:26.033 *****
2026-02-14 06:38:40.248349 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:38:40.248359 | orchestrator |
2026-02-14 06:38:40.248368 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-14 06:38:40.248378 | orchestrator | Saturday 14 February 2026 06:38:15 +0000 (0:00:01.470) 1:01:27.504 *****
2026-02-14 06:38:40.248387 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:38:40.248397 | orchestrator |
2026-02-14 06:38:40.248406 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-14 06:38:40.248416 | orchestrator | Saturday 14 February 2026 06:38:16 +0000 (0:00:01.161) 1:01:28.665 *****
2026-02-14 06:38:40.248425 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:38:40.248435 | orchestrator |
2026-02-14 06:38:40.248444 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-14 06:38:40.248454 | orchestrator | Saturday 14 February 2026 06:38:17 +0000 (0:00:01.252) 1:01:29.918 *****
2026-02-14 06:38:40.248465 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:38:40.248477 | orchestrator |
2026-02-14 06:38:40.248488 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-14 06:38:40.248499 | orchestrator | Saturday 14 February 2026 06:38:18 +0000 (0:00:01.158) 1:01:31.076 *****
2026-02-14 06:38:40.248526 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-14 06:38:40.248545 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-14 06:38:40.248559 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-14 06:38:40.248569 | orchestrator |
2026-02-14 06:38:40.248580 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-14 06:38:40.248607 | orchestrator | Saturday 14 February 2026 06:38:20 +0000 (0:00:02.199) 1:01:33.276 *****
2026-02-14 06:38:40.248619 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-14 06:38:40.248631 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-14 06:38:40.248643 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-14 06:38:40.248654 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:38:40.248665 | orchestrator |
2026-02-14 06:38:40.248676 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-14 06:38:40.248687 | orchestrator | Saturday 14 February 2026 06:38:22 +0000 (0:00:01.169) 1:01:34.446 *****
2026-02-14 06:38:40.248716 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3
2026-02-14 06:38:40.248728 | orchestrator |
2026-02-14 06:38:40.248739 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-14 06:38:40.248749 | orchestrator | Saturday 14 February 2026 06:38:23 +0000 (0:00:01.121) 1:01:35.567 *****
2026-02-14 06:38:40.248759 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:38:40.248769 | orchestrator |
2026-02-14 06:38:40.248778 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-14 06:38:40.248788 | orchestrator | Saturday 14 February 2026 06:38:24 +0000 (0:00:01.265) 1:01:36.833 *****
2026-02-14 06:38:40.248797 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:38:40.248807 | orchestrator |
2026-02-14 06:38:40.248816 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-14 06:38:40.248826 | orchestrator | Saturday 14 February 2026 06:38:25 +0000 (0:00:01.133) 1:01:37.966 *****
2026-02-14 06:38:40.248835 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:38:40.248845 | orchestrator |
2026-02-14 06:38:40.248854 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-14 06:38:40.248864 | orchestrator | Saturday 14 February 2026 06:38:26 +0000 (0:00:01.151) 1:01:39.118 *****
2026-02-14 06:38:40.248873 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:38:40.248882 | orchestrator |
2026-02-14 06:38:40.248892 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-14 06:38:40.248901 | orchestrator | Saturday 14 February 2026 06:38:28 +0000 (0:00:01.285) 1:01:40.403 *****
2026-02-14 06:38:40.248911 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-14 06:38:40.248921 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-14 06:38:40.248930 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-14 06:38:40.248939 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:38:40.248949 | orchestrator |
2026-02-14 06:38:40.248961 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-14 06:38:40.248977 | orchestrator | Saturday 14 February 2026 06:38:29 +0000 (0:00:01.462) 1:01:41.865 *****
2026-02-14 06:38:40.248996 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-14 06:38:40.249021 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-14 06:38:40.249035 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-14 06:38:40.249051 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:38:40.249083 | orchestrator |
2026-02-14 06:38:40.249099 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-14 06:38:40.249113 | orchestrator | Saturday 14 February 2026 06:38:30 +0000 (0:00:01.410) 1:01:43.276 *****
2026-02-14 06:38:40.249142 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-14 06:38:40.249170 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-14 06:38:40.249184 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-14 06:38:40.249200 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:38:40.249215 | orchestrator |
2026-02-14 06:38:40.249232 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-14 06:38:40.249249 | orchestrator | Saturday 14 February 2026 06:38:32 +0000 (0:00:01.432) 1:01:44.709 *****
2026-02-14 06:38:40.249262 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:38:40.249277 | orchestrator |
2026-02-14 06:38:40.249294 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-14 06:38:40.249336 | orchestrator | Saturday 14 February 2026 06:38:33 +0000 (0:00:01.162) 1:01:45.871 *****
2026-02-14 06:38:40.249346 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-14 06:38:40.249361 | orchestrator |
2026-02-14 06:38:40.249377 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-14 06:38:40.249394 | orchestrator | Saturday 14 February 2026 06:38:35 +0000 (0:00:01.700) 1:01:47.572 *****
2026-02-14 06:38:40.249404 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-14 06:38:40.249413 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-14 06:38:40.249430 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-14 06:38:40.249447 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-14 06:38:40.249460 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-14 06:38:40.249469 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-14 06:38:40.249478 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-14 06:38:40.249488 | orchestrator |
2026-02-14 06:38:40.249497 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-14 06:38:40.249507 | orchestrator | Saturday 14 February 2026 06:38:37 +0000 (0:00:02.268) 1:01:49.840 *****
2026-02-14 06:38:40.249516 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-14 06:38:40.249525 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-14 06:38:40.249543 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-14 06:38:40.249564 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-14 06:38:40.249588 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-14 06:38:40.249604 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-14 06:38:40.249619 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-14 06:38:40.249634 | orchestrator |
2026-02-14 06:38:40.249664 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] ****************************
2026-02-14 06:39:33.591547 | orchestrator | Saturday 14 February 2026 06:38:40 +0000 (0:00:02.703) 1:01:52.543 *****
2026-02-14 06:39:33.591665 | orchestrator | changed: [testbed-node-3]
2026-02-14 06:39:33.591682 | orchestrator |
2026-02-14 06:39:33.591695 | orchestrator | TASK [Stop ceph rgw (pt. 1)] ***************************************************
2026-02-14 06:39:33.591707 | orchestrator | Saturday 14 February 2026 06:38:42 +0000 (0:00:02.302) 1:01:54.846 *****
2026-02-14 06:39:33.591719 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-14 06:39:33.591731 | orchestrator |
2026-02-14 06:39:33.591742 | orchestrator | TASK [Stop ceph rgw (pt. 2)] ***************************************************
2026-02-14 06:39:33.591753 | orchestrator | Saturday 14 February 2026 06:38:45 +0000 (0:00:02.953) 1:01:57.800 *****
2026-02-14 06:39:33.591764 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-14 06:39:33.591799 | orchestrator |
2026-02-14 06:39:33.591810 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-14 06:39:33.591821 | orchestrator | Saturday 14 February 2026 06:38:47 +0000 (0:00:02.289) 1:02:00.089 *****
2026-02-14 06:39:33.591832 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3
2026-02-14 06:39:33.591843 | orchestrator |
2026-02-14 06:39:33.591853 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-14 06:39:33.591864 | orchestrator | Saturday 14 February 2026 06:38:48 +0000 (0:00:01.132) 1:02:01.222 *****
2026-02-14 06:39:33.591875 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3
2026-02-14 06:39:33.591885 | orchestrator |
2026-02-14 06:39:33.591896 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-14 06:39:33.591907 | orchestrator | Saturday 14 February 2026 06:38:50 +0000 (0:00:01.143) 1:02:02.365 *****
2026-02-14 06:39:33.591917 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:39:33.591928 | orchestrator |
2026-02-14 06:39:33.591939 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-14 06:39:33.591949 | orchestrator | Saturday 14 February 2026 06:38:51 +0000 (0:00:01.137) 1:02:03.503 *****
2026-02-14 06:39:33.591960 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:39:33.591972 | orchestrator |
2026-02-14 06:39:33.591982 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-14 06:39:33.591993 | orchestrator | Saturday 14 February 2026 06:38:52 +0000 (0:00:01.545) 1:02:05.048 *****
2026-02-14 06:39:33.592003 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:39:33.592014 | orchestrator |
2026-02-14 06:39:33.592025 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-14 06:39:33.592035 | orchestrator | Saturday 14 February 2026 06:38:54 +0000 (0:00:01.525) 1:02:06.573 *****
2026-02-14 06:39:33.592046 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:39:33.592056 | orchestrator |
2026-02-14 06:39:33.592067 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-14 06:39:33.592079 | orchestrator | Saturday 14 February 2026 06:38:55 +0000 (0:00:01.602) 1:02:08.176 *****
2026-02-14 06:39:33.592092 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:39:33.592104 | orchestrator |
2026-02-14 06:39:33.592116 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-14 06:39:33.592128 | orchestrator | Saturday 14 February 2026 06:38:56 +0000 (0:00:01.111) 1:02:09.287 *****
2026-02-14 06:39:33.592140 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:39:33.592153 | orchestrator |
2026-02-14 06:39:33.592165 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-14 06:39:33.592177 | orchestrator | Saturday 14 February 2026 06:38:58 +0000 (0:00:01.157) 1:02:10.445 *****
2026-02-14 06:39:33.592189 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:39:33.592201 | orchestrator |
2026-02-14 06:39:33.592213 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-14 06:39:33.592225 | orchestrator | Saturday 14 February 2026 06:38:59 +0000 (0:00:01.165) 1:02:11.611 *****
2026-02-14 06:39:33.592237 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:39:33.592250 | orchestrator |
2026-02-14 06:39:33.592262 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-14 06:39:33.592274 | orchestrator | Saturday 14 February 2026 06:39:00 +0000 (0:00:01.631) 1:02:13.243 *****
2026-02-14 06:39:33.592286 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:39:33.592299 | orchestrator |
2026-02-14 06:39:33.592311 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-14 06:39:33.592323 | orchestrator | Saturday 14 February 2026 06:39:02 +0000 (0:00:01.541) 1:02:14.785 *****
2026-02-14 06:39:33.592336 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:39:33.592388 | orchestrator |
2026-02-14 06:39:33.592402 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-14 06:39:33.592422 | orchestrator | Saturday 14 February 2026 06:39:03 +0000 (0:00:01.127) 1:02:15.912 *****
2026-02-14 06:39:33.592435 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:39:33.592446 | orchestrator |
2026-02-14 06:39:33.592457 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-14 06:39:33.592467 | orchestrator | Saturday 14 February 2026 06:39:04 +0000 (0:00:01.155) 1:02:17.068 *****
2026-02-14 06:39:33.592478 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:39:33.592488 | orchestrator |
2026-02-14 06:39:33.592517 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-14 06:39:33.592528 | orchestrator | Saturday 14 February 2026 06:39:05 +0000 (0:00:01.201) 1:02:18.270 *****
2026-02-14 06:39:33.592539 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:39:33.592550 | orchestrator |
2026-02-14 06:39:33.592560 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-14 06:39:33.592571 | orchestrator | Saturday 14 February 2026 06:39:07 +0000 (0:00:01.164) 1:02:19.434 *****
2026-02-14 06:39:33.592581 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:39:33.592592 | orchestrator |
2026-02-14 06:39:33.592621 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-14 06:39:33.592633 | orchestrator | Saturday 14 February 2026 06:39:08 +0000 (0:00:01.179) 1:02:20.614 *****
2026-02-14 06:39:33.592643 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:39:33.592654 | orchestrator |
2026-02-14 06:39:33.592665 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-14 06:39:33.592676 | orchestrator | Saturday 14 February 2026 06:39:09 +0000 (0:00:01.159) 1:02:21.773 *****
2026-02-14 06:39:33.592686 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:39:33.592697 | orchestrator |
2026-02-14 06:39:33.592708 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-14 06:39:33.592719 | orchestrator | Saturday 14 February 2026 06:39:10 +0000 (0:00:01.167) 1:02:22.940 *****
2026-02-14 06:39:33.592729 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:39:33.592740 | orchestrator |
2026-02-14 06:39:33.592751 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-14 06:39:33.592762 | orchestrator | Saturday 14 February 2026 06:39:11 +0000 (0:00:01.216) 1:02:24.157 *****
2026-02-14 06:39:33.592773 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:39:33.592783 | orchestrator |
2026-02-14 06:39:33.592794 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-14 06:39:33.592805 | orchestrator | Saturday 14 February 2026 06:39:13 +0000 (0:00:01.321) 1:02:25.479 *****
2026-02-14 06:39:33.592815 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:39:33.592826 | orchestrator |
2026-02-14 06:39:33.592837 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-14 06:39:33.592847 | orchestrator | Saturday 14 February 2026 06:39:14 +0000 (0:00:01.220) 1:02:26.699 *****
2026-02-14 06:39:33.592858 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:39:33.592869 | orchestrator |
2026-02-14 06:39:33.592879 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-14 06:39:33.592890 | orchestrator | Saturday 14 February 2026 06:39:15 +0000 (0:00:01.168) 1:02:27.867 *****
2026-02-14 06:39:33.592901 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:39:33.592911 | orchestrator |
2026-02-14 06:39:33.592922 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-14 06:39:33.592933 | orchestrator | Saturday 14 February 2026 06:39:16 +0000 (0:00:01.128) 1:02:28.995 *****
2026-02-14 06:39:33.592943 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:39:33.592954 | orchestrator |
2026-02-14 06:39:33.592965 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-14 06:39:33.592976 | orchestrator | Saturday 14 February 2026 06:39:17 +0000 (0:00:01.132) 1:02:30.128 *****
2026-02-14 06:39:33.592986 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:39:33.592997 | orchestrator |
2026-02-14 06:39:33.593008 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-14 06:39:33.593025 | orchestrator | Saturday 14 February 2026 06:39:18 +0000 (0:00:01.147) 1:02:31.276 *****
2026-02-14 06:39:33.593036 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:39:33.593047 | orchestrator |
2026-02-14 06:39:33.593057 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-14 06:39:33.593068 | orchestrator | Saturday 14 February 2026 06:39:20 +0000 (0:00:01.122) 1:02:32.398 *****
2026-02-14 06:39:33.593079 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:39:33.593089 | orchestrator |
2026-02-14 06:39:33.593100 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-14 06:39:33.593111 | orchestrator | Saturday 14 February 2026 06:39:21 +0000 (0:00:01.131) 1:02:33.530 *****
2026-02-14 06:39:33.593121 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:39:33.593132 | orchestrator |
2026-02-14 06:39:33.593143 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-14 06:39:33.593154 | orchestrator | Saturday 14 February 2026 06:39:22 +0000 (0:00:01.146) 1:02:34.676 *****
2026-02-14 06:39:33.593165 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:39:33.593175 | orchestrator |
2026-02-14 06:39:33.593186 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-14 06:39:33.593197 | orchestrator | Saturday 14 February 2026 06:39:23 +0000 (0:00:01.160) 1:02:35.837 *****
2026-02-14 06:39:33.593207 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:39:33.593218 | orchestrator |
2026-02-14 06:39:33.593229 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-14 06:39:33.593240 | orchestrator | Saturday 14 February 2026 06:39:24 +0000 (0:00:01.170) 1:02:37.007 *****
2026-02-14 06:39:33.593250 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:39:33.593261 | orchestrator |
2026-02-14 06:39:33.593272 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-14 06:39:33.593283 | orchestrator | Saturday 14 February 2026 06:39:25 +0000 (0:00:01.198) 1:02:38.205 *****
2026-02-14 06:39:33.593293 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:39:33.593304 | orchestrator |
2026-02-14 06:39:33.593315 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-14 06:39:33.593326 | orchestrator | Saturday 14 February 2026 06:39:27 +0000 (0:00:01.179) 1:02:39.385 *****
2026-02-14 06:39:33.593336 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:39:33.593364 | orchestrator |
2026-02-14 06:39:33.593376 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-14 06:39:33.593387 | orchestrator | Saturday 14 February 2026 06:39:28 +0000 (0:00:01.175) 1:02:40.560 *****
2026-02-14 06:39:33.593397 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:39:33.593408 | orchestrator |
2026-02-14 06:39:33.593419 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-14 06:39:33.593435 | orchestrator | Saturday 14 February 2026 06:39:30 +0000 (0:00:01.967) 1:02:42.529 *****
2026-02-14 06:39:33.593446 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:39:33.593456 | orchestrator |
2026-02-14 06:39:33.593467 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-14 06:39:33.593478 | orchestrator | Saturday 14 February 2026 06:39:32 +0000 (0:00:02.250) 1:02:44.779 *****
2026-02-14 06:39:33.593488 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3
2026-02-14 06:39:33.593499 | orchestrator |
2026-02-14 06:39:33.593510 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-14 06:39:33.593528 | orchestrator | Saturday 14 February 2026 06:39:33 +0000 (0:00:01.123) 1:02:45.903 *****
2026-02-14 06:40:20.937932 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:40:20.938059 | orchestrator |
2026-02-14 06:40:20.938069 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-14 06:40:20.938075 | orchestrator | Saturday 14 February 2026 06:39:34 +0000 (0:00:01.181) 1:02:47.084 *****
2026-02-14 06:40:20.938080 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:40:20.938085 | orchestrator |
2026-02-14 06:40:20.938108 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-14 06:40:20.938114 | orchestrator | Saturday 14 February 2026 06:39:35 +0000 (0:00:01.181) 1:02:48.266 *****
2026-02-14 06:40:20.938119 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-14 06:40:20.938124 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-14 06:40:20.938129 | orchestrator |
2026-02-14 06:40:20.938134 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-14 06:40:20.938139 | orchestrator | Saturday 14 February 2026 06:39:37 +0000 (0:00:01.794) 1:02:50.060 *****
2026-02-14 06:40:20.938144 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:40:20.938150 | orchestrator |
2026-02-14 06:40:20.938155 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-14 06:40:20.938160 | orchestrator | Saturday 14 February 2026 06:39:39 +0000 (0:00:01.464) 1:02:51.525 *****
2026-02-14 06:40:20.938165 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:40:20.938170 | orchestrator |
2026-02-14 06:40:20.938175 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-14 06:40:20.938179 | orchestrator | Saturday 14 February 2026 06:39:40 +0000 (0:00:01.165) 1:02:52.690 *****
2026-02-14 06:40:20.938184 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:40:20.938189 | orchestrator |
2026-02-14 06:40:20.938194 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-14 06:40:20.938198 | orchestrator | Saturday 14 February 2026 06:39:41 +0000 (0:00:01.144) 1:02:53.835 *****
2026-02-14 06:40:20.938203 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:40:20.938208 | orchestrator |
2026-02-14 06:40:20.938213 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-14 06:40:20.938217 | orchestrator | Saturday 14 February 2026 06:39:42 +0000 (0:00:01.218) 1:02:55.053 *****
2026-02-14 06:40:20.938222 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3
2026-02-14 06:40:20.938227 | orchestrator |
2026-02-14 06:40:20.938232 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-14 06:40:20.938237 | orchestrator | Saturday 14 February 2026 06:39:43 +0000 (0:00:01.187) 1:02:56.241 *****
2026-02-14 06:40:20.938241 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:40:20.938246 | orchestrator |
2026-02-14 06:40:20.938251 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-14 06:40:20.938256 | orchestrator | Saturday 14 February 2026 06:39:45 +0000 (0:00:01.743) 1:02:57.984 *****
2026-02-14 06:40:20.938260 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-14 06:40:20.938265 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-14 06:40:20.938270 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-14 06:40:20.938274 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:40:20.938279 | orchestrator |
2026-02-14 06:40:20.938284 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-14 06:40:20.938288 | orchestrator | Saturday 14 February 2026 06:39:46 +0000 (0:00:01.186) 1:02:59.170 *****
2026-02-14 06:40:20.938293 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:40:20.938298 | orchestrator |
2026-02-14 06:40:20.938302 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-14 06:40:20.938307 | orchestrator | Saturday 14 February 2026 06:39:47 +0000 (0:00:01.134) 1:03:00.305 *****
2026-02-14 06:40:20.938312 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:40:20.938317 | orchestrator |
2026-02-14 06:40:20.938321 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-14 06:40:20.938326 | orchestrator | Saturday 14 February 2026 06:39:49 +0000 (0:00:01.199) 1:03:01.505 *****
2026-02-14 06:40:20.938331 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:40:20.938335 | orchestrator |
2026-02-14 06:40:20.938340 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-14 06:40:20.938349 | orchestrator | Saturday 14 February 2026 06:39:50 +0000 (0:00:01.258) 1:03:02.763 *****
2026-02-14 06:40:20.938354 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:40:20.938359 | orchestrator |
2026-02-14 06:40:20.938363 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-14 06:40:20.938368 | orchestrator | Saturday 14 February 2026 06:39:51 +0000 (0:00:01.139) 1:03:03.903 *****
2026-02-14 06:40:20.938373 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:40:20.938378 | orchestrator |
2026-02-14 06:40:20.938403 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-14 06:40:20.938408 | orchestrator | Saturday 14 February 2026 06:39:52 +0000 (0:00:02.501) 1:03:05.068 *****
2026-02-14 06:40:20.938413 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:40:20.938418 | orchestrator |
2026-02-14 06:40:20.938436 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-14 06:40:20.938441 | orchestrator | Saturday 14 February 2026 06:39:55 +0000 (0:00:02.501) 1:03:07.569 *****
2026-02-14 06:40:20.938446 | orchestrator | ok: [testbed-node-3]
2026-02-14 06:40:20.938450 | orchestrator |
2026-02-14 06:40:20.938455 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-14 06:40:20.938460 | orchestrator | Saturday 14 February 2026 06:39:56 +0000 (0:00:01.200) 1:03:08.770 *****
2026-02-14 06:40:20.938464 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3
2026-02-14 06:40:20.938469 | orchestrator |
2026-02-14 06:40:20.938474 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-14 06:40:20.938491 | orchestrator | Saturday 14 February 2026 06:39:57 +0000 (0:00:01.271) 1:03:10.041 *****
2026-02-14 06:40:20.938497 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:40:20.938502 | orchestrator |
2026-02-14 06:40:20.938507 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-14 06:40:20.938513 | orchestrator | Saturday 14 February 2026 06:39:58 +0000 (0:00:01.186) 1:03:11.228 *****
2026-02-14 06:40:20.938519 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:40:20.938524 | orchestrator |
2026-02-14 06:40:20.938529 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-14 06:40:20.938535 | orchestrator | Saturday 14 February 2026 06:40:00 +0000 (0:00:01.148) 1:03:12.376 *****
2026-02-14 06:40:20.938540 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:40:20.938546 | orchestrator |
2026-02-14 06:40:20.938551 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-14 06:40:20.938563 | orchestrator | Saturday 14 February 2026 06:40:01 +0000 (0:00:01.174) 1:03:13.550 *****
2026-02-14 06:40:20.938569 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:40:20.938575 | orchestrator |
2026-02-14 06:40:20.938580 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release
nautilus] ****************** 2026-02-14 06:40:20.938586 | orchestrator | Saturday 14 February 2026 06:40:02 +0000 (0:00:01.165) 1:03:14.716 ***** 2026-02-14 06:40:20.938591 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:40:20.938597 | orchestrator | 2026-02-14 06:40:20.938602 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-14 06:40:20.938607 | orchestrator | Saturday 14 February 2026 06:40:03 +0000 (0:00:01.156) 1:03:15.873 ***** 2026-02-14 06:40:20.938613 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:40:20.938618 | orchestrator | 2026-02-14 06:40:20.938624 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-14 06:40:20.938629 | orchestrator | Saturday 14 February 2026 06:40:04 +0000 (0:00:01.156) 1:03:17.029 ***** 2026-02-14 06:40:20.938634 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:40:20.938640 | orchestrator | 2026-02-14 06:40:20.938645 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-14 06:40:20.938650 | orchestrator | Saturday 14 February 2026 06:40:05 +0000 (0:00:01.189) 1:03:18.218 ***** 2026-02-14 06:40:20.938656 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:40:20.938665 | orchestrator | 2026-02-14 06:40:20.938670 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-14 06:40:20.938676 | orchestrator | Saturday 14 February 2026 06:40:07 +0000 (0:00:01.150) 1:03:19.369 ***** 2026-02-14 06:40:20.938681 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:40:20.938687 | orchestrator | 2026-02-14 06:40:20.938692 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-14 06:40:20.938698 | orchestrator | Saturday 14 February 2026 06:40:08 +0000 (0:00:01.184) 1:03:20.553 ***** 2026-02-14 06:40:20.938703 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3 2026-02-14 06:40:20.938709 | orchestrator | 2026-02-14 06:40:20.938713 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-14 06:40:20.938718 | orchestrator | Saturday 14 February 2026 06:40:09 +0000 (0:00:01.153) 1:03:21.706 ***** 2026-02-14 06:40:20.938723 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-02-14 06:40:20.938728 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/) 2026-02-14 06:40:20.938733 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-02-14 06:40:20.938738 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-02-14 06:40:20.938742 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-02-14 06:40:20.938747 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-02-14 06:40:20.938752 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-02-14 06:40:20.938756 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-02-14 06:40:20.938762 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-14 06:40:20.938766 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-14 06:40:20.938771 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-14 06:40:20.938776 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-14 06:40:20.938781 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-14 06:40:20.938785 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-14 06:40:20.938790 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-02-14 06:40:20.938795 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-02-14 06:40:20.938800 | orchestrator | 2026-02-14 06:40:20.938804 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-14 06:40:20.938809 | orchestrator | Saturday 14 February 2026 06:40:16 +0000 (0:00:06.748) 1:03:28.455 ***** 2026-02-14 06:40:20.938814 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3 2026-02-14 06:40:20.938818 | orchestrator | 2026-02-14 06:40:20.938823 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-14 06:40:20.938828 | orchestrator | Saturday 14 February 2026 06:40:17 +0000 (0:00:01.279) 1:03:29.734 ***** 2026-02-14 06:40:20.938835 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-14 06:40:20.938841 | orchestrator | 2026-02-14 06:40:20.938846 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-14 06:40:20.938851 | orchestrator | Saturday 14 February 2026 06:40:18 +0000 (0:00:01.555) 1:03:31.289 ***** 2026-02-14 06:40:20.938856 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-14 06:40:20.938860 | orchestrator | 2026-02-14 06:40:20.938865 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-14 06:40:20.938873 | orchestrator | Saturday 14 February 2026 06:40:20 +0000 (0:00:01.957) 1:03:33.247 ***** 2026-02-14 06:41:12.700139 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:41:12.700257 | orchestrator | 2026-02-14 06:41:12.700273 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-14 06:41:12.700286 | orchestrator | Saturday 14 February 2026 06:40:22 +0000 (0:00:01.201) 1:03:34.448 ***** 2026-02-14 06:41:12.700328 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:41:12.700341 | 
orchestrator | 2026-02-14 06:41:12.700352 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-14 06:41:12.700363 | orchestrator | Saturday 14 February 2026 06:40:23 +0000 (0:00:01.132) 1:03:35.580 ***** 2026-02-14 06:41:12.700374 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:41:12.700385 | orchestrator | 2026-02-14 06:41:12.700396 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-14 06:41:12.700407 | orchestrator | Saturday 14 February 2026 06:40:24 +0000 (0:00:01.129) 1:03:36.710 ***** 2026-02-14 06:41:12.700418 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:41:12.700472 | orchestrator | 2026-02-14 06:41:12.700483 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-14 06:41:12.700494 | orchestrator | Saturday 14 February 2026 06:40:25 +0000 (0:00:01.115) 1:03:37.826 ***** 2026-02-14 06:41:12.700505 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:41:12.700516 | orchestrator | 2026-02-14 06:41:12.700526 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-14 06:41:12.700539 | orchestrator | Saturday 14 February 2026 06:40:26 +0000 (0:00:01.101) 1:03:38.928 ***** 2026-02-14 06:41:12.700550 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:41:12.700561 | orchestrator | 2026-02-14 06:41:12.700572 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-14 06:41:12.700582 | orchestrator | Saturday 14 February 2026 06:40:27 +0000 (0:00:01.127) 1:03:40.056 ***** 2026-02-14 06:41:12.700593 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:41:12.700604 | orchestrator | 2026-02-14 06:41:12.700615 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
*** 2026-02-14 06:41:12.700626 | orchestrator | Saturday 14 February 2026 06:40:28 +0000 (0:00:01.130) 1:03:41.186 ***** 2026-02-14 06:41:12.700636 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:41:12.700647 | orchestrator | 2026-02-14 06:41:12.700658 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-14 06:41:12.700668 | orchestrator | Saturday 14 February 2026 06:40:30 +0000 (0:00:01.187) 1:03:42.374 ***** 2026-02-14 06:41:12.700680 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:41:12.700691 | orchestrator | 2026-02-14 06:41:12.700702 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-14 06:41:12.700713 | orchestrator | Saturday 14 February 2026 06:40:31 +0000 (0:00:01.176) 1:03:43.550 ***** 2026-02-14 06:41:12.700723 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:41:12.700734 | orchestrator | 2026-02-14 06:41:12.700745 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-14 06:41:12.700756 | orchestrator | Saturday 14 February 2026 06:40:32 +0000 (0:00:01.342) 1:03:44.893 ***** 2026-02-14 06:41:12.700766 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:41:12.700777 | orchestrator | 2026-02-14 06:41:12.700788 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-14 06:41:12.700798 | orchestrator | Saturday 14 February 2026 06:40:33 +0000 (0:00:01.143) 1:03:46.037 ***** 2026-02-14 06:41:12.700809 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-02-14 06:41:12.700820 | orchestrator | 2026-02-14 06:41:12.700831 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-14 06:41:12.700841 | orchestrator | Saturday 14 February 2026 06:40:38 +0000 (0:00:04.418) 1:03:50.455 ***** 2026-02-14 06:41:12.700853 | orchestrator | 
ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-14 06:41:12.700865 | orchestrator | 2026-02-14 06:41:12.700876 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-14 06:41:12.700887 | orchestrator | Saturday 14 February 2026 06:40:39 +0000 (0:00:01.218) 1:03:51.673 ***** 2026-02-14 06:41:12.700908 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-02-14 06:41:12.700923 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-02-14 06:41:12.700935 | orchestrator | 2026-02-14 06:41:12.700962 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-14 06:41:12.700974 | orchestrator | Saturday 14 February 2026 06:40:44 +0000 (0:00:04.895) 1:03:56.570 ***** 2026-02-14 06:41:12.700984 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:41:12.700995 | orchestrator | 2026-02-14 06:41:12.701006 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-14 06:41:12.701017 | orchestrator | Saturday 14 February 2026 06:40:45 +0000 (0:00:01.201) 1:03:57.771 ***** 2026-02-14 06:41:12.701028 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:41:12.701038 | orchestrator | 2026-02-14 06:41:12.701049 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-14 06:41:12.701078 | orchestrator | Saturday 14 February 2026 06:40:46 +0000 (0:00:01.190) 1:03:58.963 ***** 2026-02-14 06:41:12.701091 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:41:12.701102 | orchestrator | 2026-02-14 06:41:12.701113 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-14 06:41:12.701123 | orchestrator | Saturday 14 February 2026 06:40:47 +0000 (0:00:01.219) 1:04:00.182 ***** 2026-02-14 06:41:12.701134 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:41:12.701145 | orchestrator | 2026-02-14 06:41:12.701156 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-14 06:41:12.701166 | orchestrator | Saturday 14 February 2026 06:40:49 +0000 (0:00:01.216) 1:04:01.398 ***** 2026-02-14 06:41:12.701177 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:41:12.701188 | orchestrator | 2026-02-14 06:41:12.701199 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-14 06:41:12.701210 | orchestrator | Saturday 14 February 2026 06:40:50 +0000 (0:00:01.149) 1:04:02.548 ***** 2026-02-14 06:41:12.701221 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:41:12.701232 | orchestrator | 2026-02-14 06:41:12.701243 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-14 06:41:12.701254 | orchestrator | Saturday 14 February 2026 06:40:51 +0000 (0:00:01.265) 1:04:03.814 ***** 2026-02-14 06:41:12.701265 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-14 06:41:12.701276 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-14 06:41:12.701287 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-14 06:41:12.701298 | orchestrator | skipping: 
[testbed-node-3] 2026-02-14 06:41:12.701309 | orchestrator | 2026-02-14 06:41:12.701320 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-14 06:41:12.701330 | orchestrator | Saturday 14 February 2026 06:40:53 +0000 (0:00:01.825) 1:04:05.640 ***** 2026-02-14 06:41:12.701341 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-14 06:41:12.701352 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-14 06:41:12.701363 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-14 06:41:12.701374 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:41:12.701384 | orchestrator | 2026-02-14 06:41:12.701395 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-14 06:41:12.701406 | orchestrator | Saturday 14 February 2026 06:40:55 +0000 (0:00:01.807) 1:04:07.448 ***** 2026-02-14 06:41:12.701454 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-14 06:41:12.701467 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-14 06:41:12.701477 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-14 06:41:12.701488 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:41:12.701499 | orchestrator | 2026-02-14 06:41:12.701510 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-14 06:41:12.701521 | orchestrator | Saturday 14 February 2026 06:40:57 +0000 (0:00:01.915) 1:04:09.364 ***** 2026-02-14 06:41:12.701531 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:41:12.701542 | orchestrator | 2026-02-14 06:41:12.701561 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-14 06:41:12.701581 | orchestrator | Saturday 14 February 2026 06:40:58 +0000 (0:00:01.165) 1:04:10.529 ***** 2026-02-14 06:41:12.701600 | orchestrator | ok: 
[testbed-node-3] => (item=0) 2026-02-14 06:41:12.701620 | orchestrator | 2026-02-14 06:41:12.701639 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-14 06:41:12.701657 | orchestrator | Saturday 14 February 2026 06:40:59 +0000 (0:00:01.360) 1:04:11.890 ***** 2026-02-14 06:41:12.701676 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:41:12.701696 | orchestrator | 2026-02-14 06:41:12.701717 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-02-14 06:41:12.701736 | orchestrator | Saturday 14 February 2026 06:41:01 +0000 (0:00:01.755) 1:04:13.645 ***** 2026-02-14 06:41:12.701755 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3 2026-02-14 06:41:12.701766 | orchestrator | 2026-02-14 06:41:12.701777 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-14 06:41:12.701788 | orchestrator | Saturday 14 February 2026 06:41:02 +0000 (0:00:01.488) 1:04:15.134 ***** 2026-02-14 06:41:12.701798 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-14 06:41:12.701809 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-14 06:41:12.701820 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-14 06:41:12.701830 | orchestrator | 2026-02-14 06:41:12.701841 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-14 06:41:12.701852 | orchestrator | Saturday 14 February 2026 06:41:06 +0000 (0:00:03.252) 1:04:18.386 ***** 2026-02-14 06:41:12.701862 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-02-14 06:41:12.701873 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-14 06:41:12.701884 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:41:12.701895 | orchestrator | 2026-02-14 06:41:12.701905 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-02-14 06:41:12.701916 | orchestrator | Saturday 14 February 2026 06:41:08 +0000 (0:00:01.959) 1:04:20.346 ***** 2026-02-14 06:41:12.701935 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:41:12.701946 | orchestrator | 2026-02-14 06:41:12.701956 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-14 06:41:12.701967 | orchestrator | Saturday 14 February 2026 06:41:09 +0000 (0:00:01.151) 1:04:21.497 ***** 2026-02-14 06:41:12.701978 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3 2026-02-14 06:41:12.701989 | orchestrator | 2026-02-14 06:41:12.702000 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-14 06:41:12.702010 | orchestrator | Saturday 14 February 2026 06:41:10 +0000 (0:00:01.537) 1:04:23.035 ***** 2026-02-14 06:41:12.702095 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-14 06:42:26.601671 | orchestrator | 2026-02-14 06:42:26.601790 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-14 06:42:26.601807 | orchestrator | Saturday 14 February 2026 06:41:12 +0000 (0:00:01.979) 1:04:25.014 ***** 2026-02-14 06:42:26.601846 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-14 06:42:26.601859 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-14 06:42:26.601871 | orchestrator | 2026-02-14 06:42:26.601882 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-14 06:42:26.601893 | orchestrator | Saturday 14 February 2026 06:41:17 +0000 (0:00:05.215) 1:04:30.230 ***** 
2026-02-14 06:42:26.601904 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-14 06:42:26.601915 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-14 06:42:26.601926 | orchestrator | 2026-02-14 06:42:26.601936 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-14 06:42:26.601947 | orchestrator | Saturday 14 February 2026 06:41:21 +0000 (0:00:03.147) 1:04:33.378 ***** 2026-02-14 06:42:26.601958 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-02-14 06:42:26.601969 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:42:26.601981 | orchestrator | 2026-02-14 06:42:26.601992 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-14 06:42:26.602002 | orchestrator | Saturday 14 February 2026 06:41:23 +0000 (0:00:02.006) 1:04:35.385 ***** 2026-02-14 06:42:26.602013 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-02-14 06:42:26.602088 | orchestrator | 2026-02-14 06:42:26.602100 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-14 06:42:26.602110 | orchestrator | Saturday 14 February 2026 06:41:24 +0000 (0:00:01.495) 1:04:36.881 ***** 2026-02-14 06:42:26.602121 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 06:42:26.602133 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 06:42:26.602181 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 06:42:26.602193 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-02-14 06:42:26.602205 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 06:42:26.602218 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:42:26.602231 | orchestrator | 2026-02-14 06:42:26.602243 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-14 06:42:26.602256 | orchestrator | Saturday 14 February 2026 06:41:26 +0000 (0:00:01.697) 1:04:38.578 ***** 2026-02-14 06:42:26.602268 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 06:42:26.602280 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 06:42:26.602292 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 06:42:26.602305 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 06:42:26.602316 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-14 06:42:26.602328 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:42:26.602340 | orchestrator | 2026-02-14 06:42:26.602353 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-02-14 06:42:26.602364 | orchestrator | Saturday 14 February 2026 06:41:27 +0000 (0:00:01.674) 1:04:40.252 ***** 2026-02-14 06:42:26.602377 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-14 06:42:26.602400 
| orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-14 06:42:26.602429 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-14 06:42:26.602441 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-14 06:42:26.602453 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-14 06:42:26.602463 | orchestrator | 2026-02-14 06:42:26.602474 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-02-14 06:42:26.602532 | orchestrator | Saturday 14 February 2026 06:41:58 +0000 (0:00:30.815) 1:05:11.068 ***** 2026-02-14 06:42:26.602546 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:42:26.602557 | orchestrator | 2026-02-14 06:42:26.602568 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-02-14 06:42:26.602578 | orchestrator | Saturday 14 February 2026 06:41:59 +0000 (0:00:01.211) 1:05:12.280 ***** 2026-02-14 06:42:26.602589 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:42:26.602600 | orchestrator | 2026-02-14 06:42:26.602610 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-02-14 06:42:26.602621 | orchestrator | Saturday 14 February 2026 06:42:01 +0000 (0:00:01.100) 1:05:13.381 ***** 2026-02-14 06:42:26.602631 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3 2026-02-14 06:42:26.602642 | orchestrator | 2026-02-14 06:42:26.602653 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-02-14 06:42:26.602663 | orchestrator | Saturday 14 February 2026 06:42:02 +0000 (0:00:01.493) 1:05:14.874 ***** 2026-02-14 06:42:26.602674 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3 2026-02-14 06:42:26.602685 | orchestrator | 2026-02-14 06:42:26.602696 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-02-14 06:42:26.602707 | orchestrator | Saturday 14 February 2026 06:42:04 +0000 (0:00:01.664) 1:05:16.538 ***** 2026-02-14 06:42:26.602717 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:42:26.602728 | orchestrator | 2026-02-14 06:42:26.602739 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-02-14 06:42:26.602749 | orchestrator | Saturday 14 February 2026 06:42:06 +0000 (0:00:02.068) 1:05:18.607 ***** 2026-02-14 06:42:26.602760 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:42:26.602770 | orchestrator | 2026-02-14 06:42:26.602781 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-02-14 06:42:26.602792 | orchestrator | Saturday 14 February 2026 06:42:08 +0000 (0:00:01.997) 1:05:20.605 ***** 2026-02-14 06:42:26.602802 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:42:26.602813 | orchestrator | 2026-02-14 06:42:26.602824 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-02-14 06:42:26.602834 | orchestrator | Saturday 14 February 2026 06:42:10 +0000 (0:00:02.378) 1:05:22.984 ***** 2026-02-14 06:42:26.602845 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-14 06:42:26.602856 | orchestrator | 2026-02-14 06:42:26.602866 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-02-14 06:42:26.602877 | 
orchestrator | 2026-02-14 06:42:26.602887 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-14 06:42:26.602898 | orchestrator | Saturday 14 February 2026 06:42:13 +0000 (0:00:02.851) 1:05:25.835 ***** 2026-02-14 06:42:26.602909 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4 2026-02-14 06:42:26.602927 | orchestrator | 2026-02-14 06:42:26.602938 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-14 06:42:26.602948 | orchestrator | Saturday 14 February 2026 06:42:14 +0000 (0:00:01.180) 1:05:27.016 ***** 2026-02-14 06:42:26.602959 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:42:26.602970 | orchestrator | 2026-02-14 06:42:26.602980 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-14 06:42:26.602991 | orchestrator | Saturday 14 February 2026 06:42:16 +0000 (0:00:01.525) 1:05:28.542 ***** 2026-02-14 06:42:26.603001 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:42:26.603012 | orchestrator | 2026-02-14 06:42:26.603022 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-14 06:42:26.603033 | orchestrator | Saturday 14 February 2026 06:42:17 +0000 (0:00:01.136) 1:05:29.679 ***** 2026-02-14 06:42:26.603043 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:42:26.603054 | orchestrator | 2026-02-14 06:42:26.603065 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-14 06:42:26.603075 | orchestrator | Saturday 14 February 2026 06:42:18 +0000 (0:00:01.436) 1:05:31.116 ***** 2026-02-14 06:42:26.603086 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:42:26.603097 | orchestrator | 2026-02-14 06:42:26.603107 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-14 06:42:26.603118 | orchestrator | Saturday 
14 February 2026 06:42:19 +0000 (0:00:01.185) 1:05:32.302 ***** 2026-02-14 06:42:26.603128 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:42:26.603139 | orchestrator | 2026-02-14 06:42:26.603150 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-14 06:42:26.603160 | orchestrator | Saturday 14 February 2026 06:42:21 +0000 (0:00:01.220) 1:05:33.522 ***** 2026-02-14 06:42:26.603171 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:42:26.603182 | orchestrator | 2026-02-14 06:42:26.603192 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-14 06:42:26.603203 | orchestrator | Saturday 14 February 2026 06:42:22 +0000 (0:00:01.183) 1:05:34.705 ***** 2026-02-14 06:42:26.603213 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:42:26.603224 | orchestrator | 2026-02-14 06:42:26.603235 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-14 06:42:26.603251 | orchestrator | Saturday 14 February 2026 06:42:23 +0000 (0:00:01.197) 1:05:35.903 ***** 2026-02-14 06:42:26.603262 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:42:26.603273 | orchestrator | 2026-02-14 06:42:26.603284 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-14 06:42:26.603294 | orchestrator | Saturday 14 February 2026 06:42:24 +0000 (0:00:01.173) 1:05:37.077 ***** 2026-02-14 06:42:26.603305 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 06:42:26.603315 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:42:26.603326 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 06:42:26.603337 | orchestrator | 2026-02-14 06:42:26.603348 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-02-14 06:42:26.603364 | orchestrator | Saturday 14 February 2026 06:42:26 +0000 (0:00:01.828) 1:05:38.906 ***** 2026-02-14 06:42:52.131179 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:42:52.131294 | orchestrator | 2026-02-14 06:42:52.131310 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-14 06:42:52.131324 | orchestrator | Saturday 14 February 2026 06:42:27 +0000 (0:00:01.297) 1:05:40.203 ***** 2026-02-14 06:42:52.131335 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 06:42:52.131347 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:42:52.131358 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 06:42:52.131369 | orchestrator | 2026-02-14 06:42:52.131408 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-14 06:42:52.131419 | orchestrator | Saturday 14 February 2026 06:42:30 +0000 (0:00:02.906) 1:05:43.110 ***** 2026-02-14 06:42:52.131431 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-14 06:42:52.131442 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-14 06:42:52.131452 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-14 06:42:52.131464 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:42:52.131474 | orchestrator | 2026-02-14 06:42:52.131485 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-14 06:42:52.131496 | orchestrator | Saturday 14 February 2026 06:42:32 +0000 (0:00:01.430) 1:05:44.540 ***** 2026-02-14 06:42:52.131560 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-14 06:42:52.131580 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-14 06:42:52.131600 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-14 06:42:52.131619 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:42:52.131638 | orchestrator | 2026-02-14 06:42:52.131656 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-14 06:42:52.131674 | orchestrator | Saturday 14 February 2026 06:42:34 +0000 (0:00:02.011) 1:05:46.552 ***** 2026-02-14 06:42:52.131695 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 06:42:52.131715 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 06:42:52.131735 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 06:42:52.131755 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:42:52.131776 | orchestrator | 2026-02-14 06:42:52.131798 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-14 06:42:52.131820 | orchestrator | Saturday 14 February 2026 06:42:35 +0000 (0:00:01.174) 1:05:47.726 ***** 2026-02-14 06:42:52.131881 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'fcade5e8eca4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-14 06:42:28.430708', 'end': '2026-02-14 06:42:28.472418', 'delta': '0:00:00.041710', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fcade5e8eca4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-14 06:42:52.131909 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'b8937503c016', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-14 06:42:28.973289', 'end': '2026-02-14 06:42:29.014082', 'delta': '0:00:00.040793', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b8937503c016'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-14 06:42:52.131921 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'bc1e9cbf1ddd', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-14 06:42:29.570641', 'end': '2026-02-14 06:42:29.616051', 'delta': '0:00:00.045410', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['bc1e9cbf1ddd'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-14 06:42:52.131932 | orchestrator | 2026-02-14 06:42:52.131943 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-14 06:42:52.131954 | orchestrator | Saturday 14 February 2026 06:42:36 +0000 (0:00:01.157) 1:05:48.884 ***** 2026-02-14 06:42:52.131965 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:42:52.131976 | orchestrator | 2026-02-14 06:42:52.131987 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-14 06:42:52.131997 | orchestrator | Saturday 14 February 2026 06:42:37 +0000 (0:00:01.244) 1:05:50.128 ***** 2026-02-14 06:42:52.132008 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:42:52.132019 | orchestrator | 2026-02-14 06:42:52.132029 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
********************************* 2026-02-14 06:42:52.132040 | orchestrator | Saturday 14 February 2026 06:42:39 +0000 (0:00:01.727) 1:05:51.856 ***** 2026-02-14 06:42:52.132051 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:42:52.132061 | orchestrator | 2026-02-14 06:42:52.132072 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-14 06:42:52.132083 | orchestrator | Saturday 14 February 2026 06:42:40 +0000 (0:00:01.194) 1:05:53.050 ***** 2026-02-14 06:42:52.132093 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-14 06:42:52.132104 | orchestrator | 2026-02-14 06:42:52.132115 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-14 06:42:52.132126 | orchestrator | Saturday 14 February 2026 06:42:42 +0000 (0:00:01.989) 1:05:55.040 ***** 2026-02-14 06:42:52.132136 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:42:52.132147 | orchestrator | 2026-02-14 06:42:52.132158 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-14 06:42:52.132168 | orchestrator | Saturday 14 February 2026 06:42:43 +0000 (0:00:01.134) 1:05:56.174 ***** 2026-02-14 06:42:52.132179 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:42:52.132190 | orchestrator | 2026-02-14 06:42:52.132200 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-14 06:42:52.132211 | orchestrator | Saturday 14 February 2026 06:42:45 +0000 (0:00:01.195) 1:05:57.370 ***** 2026-02-14 06:42:52.132222 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:42:52.132240 | orchestrator | 2026-02-14 06:42:52.132250 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-14 06:42:52.132261 | orchestrator | Saturday 14 February 2026 06:42:46 +0000 (0:00:01.239) 1:05:58.610 ***** 2026-02-14 06:42:52.132272 | orchestrator | 
skipping: [testbed-node-4] 2026-02-14 06:42:52.132282 | orchestrator | 2026-02-14 06:42:52.132293 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-14 06:42:52.132304 | orchestrator | Saturday 14 February 2026 06:42:47 +0000 (0:00:01.127) 1:05:59.737 ***** 2026-02-14 06:42:52.132314 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:42:52.132325 | orchestrator | 2026-02-14 06:42:52.132341 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-14 06:42:52.132351 | orchestrator | Saturday 14 February 2026 06:42:48 +0000 (0:00:01.124) 1:06:00.862 ***** 2026-02-14 06:42:52.132362 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:42:52.132373 | orchestrator | 2026-02-14 06:42:52.132384 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-14 06:42:52.132395 | orchestrator | Saturday 14 February 2026 06:42:49 +0000 (0:00:01.199) 1:06:02.061 ***** 2026-02-14 06:42:52.132406 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:42:52.132416 | orchestrator | 2026-02-14 06:42:52.132427 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-14 06:42:52.132438 | orchestrator | Saturday 14 February 2026 06:42:50 +0000 (0:00:01.214) 1:06:03.275 ***** 2026-02-14 06:42:52.132448 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:42:52.132459 | orchestrator | 2026-02-14 06:42:52.132469 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-14 06:42:52.132487 | orchestrator | Saturday 14 February 2026 06:42:52 +0000 (0:00:01.164) 1:06:04.440 ***** 2026-02-14 06:42:54.735283 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:42:54.735413 | orchestrator | 2026-02-14 06:42:54.735439 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-14 06:42:54.735461 
| orchestrator | Saturday 14 February 2026 06:42:53 +0000 (0:00:01.136) 1:06:05.576 ***** 2026-02-14 06:42:54.735481 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:42:54.735502 | orchestrator | 2026-02-14 06:42:54.735606 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-14 06:42:54.735625 | orchestrator | Saturday 14 February 2026 06:42:54 +0000 (0:00:01.231) 1:06:06.808 ***** 2026-02-14 06:42:54.735647 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:42:54.735665 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--df737486--1b51--5b4a--92b8--76d7a8957091-osd--block--df737486--1b51--5b4a--92b8--76d7a8957091', 'dm-uuid-LVM-EB1XqRdFm5BWl32sOsML4BzRiPAaSfab8xK25yZZCddpKgHxc3NQuNizerGpwRdL'], 'uuids': ['cbd2394d-6972-4905-b52e-c3fabde9215a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f960435b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['8xK25y-ZZCd-dpKg-Hxc3-NQuN-izer-GpwRdL']}})  2026-02-14 06:42:54.735680 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_600e740f-7698-45cf-9f18-28df3084435e', 'scsi-SQEMU_QEMU_HARDDISK_600e740f-7698-45cf-9f18-28df3084435e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '600e740f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-14 06:42:54.735722 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-PPJEoE-t8lH-Lsu9-VCxv-DzG3-SEi9-DpziQD', 'scsi-0QEMU_QEMU_HARDDISK_f8b6a063-90c4-466f-950a-7ec8689e5fcc', 'scsi-SQEMU_QEMU_HARDDISK_f8b6a063-90c4-466f-950a-7ec8689e5fcc'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f8b6a063', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--7b577363--2bac--543e--944e--5354861b1af5-osd--block--7b577363--2bac--543e--944e--5354861b1af5']}})  2026-02-14 06:42:54.735751 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:42:54.735765 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:42:54.735818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-04-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-14 06:42:54.735846 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:42:54.735865 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-SN89xi-mT6S-OMxw-vqsI-uUyB-OeGR-YcFBXd', 'dm-uuid-CRYPT-LUKS2-366eda1d300c4ff497bf868d045a2886-SN89xi-mT6S-OMxw-vqsI-uUyB-OeGR-YcFBXd'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-14 06:42:54.735884 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:42:54.735918 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7b577363--2bac--543e--944e--5354861b1af5-osd--block--7b577363--2bac--543e--944e--5354861b1af5', 'dm-uuid-LVM-0VL0CxXxe2vdWsz49rVaxb3uSV9CWoFcSN89ximT6SOMxwvqsIuUyBOeGRYcFBXd'], 'uuids': ['366eda1d-300c-4ff4-97bf-868d045a2886'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f8b6a063', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['SN89xi-mT6S-OMxw-vqsI-uUyB-OeGR-YcFBXd']}})  2026-02-14 06:42:54.735937 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-9XBo1I-CFLx-ADHD-pZVq-BmE6-mdcf-IWW9zX', 'scsi-0QEMU_QEMU_HARDDISK_f960435b-b83d-47c8-ac31-653544f80bd0', 'scsi-SQEMU_QEMU_HARDDISK_f960435b-b83d-47c8-ac31-653544f80bd0'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f960435b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--df737486--1b51--5b4a--92b8--76d7a8957091-osd--block--df737486--1b51--5b4a--92b8--76d7a8957091']}})  2026-02-14 06:42:54.735964 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:42:54.736005 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '677d5586', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part16', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part14', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part15', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part1', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-14 06:42:56.244603 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:42:56.244706 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:42:56.244722 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-8xK25y-ZZCd-dpKg-Hxc3-NQuN-izer-GpwRdL', 'dm-uuid-CRYPT-LUKS2-cbd2394d69724905b52ec3fabde9215a-8xK25y-ZZCd-dpKg-Hxc3-NQuN-izer-GpwRdL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-14 06:42:56.244738 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:42:56.244752 | orchestrator | 2026-02-14 06:42:56.244766 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-14 06:42:56.244780 | orchestrator | Saturday 14 February 2026 06:42:56 +0000 (0:00:01.546) 1:06:08.355 ***** 2026-02-14 06:42:56.244812 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:42:56.244826 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--df737486--1b51--5b4a--92b8--76d7a8957091-osd--block--df737486--1b51--5b4a--92b8--76d7a8957091', 'dm-uuid-LVM-EB1XqRdFm5BWl32sOsML4BzRiPAaSfab8xK25yZZCddpKgHxc3NQuNizerGpwRdL'], 'uuids': ['cbd2394d-6972-4905-b52e-c3fabde9215a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f960435b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['8xK25y-ZZCd-dpKg-Hxc3-NQuN-izer-GpwRdL']}}, 'ansible_loop_var': 'item'})  2026-02-14 06:42:56.244838 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_600e740f-7698-45cf-9f18-28df3084435e', 'scsi-SQEMU_QEMU_HARDDISK_600e740f-7698-45cf-9f18-28df3084435e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '600e740f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:42:56.244891 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-PPJEoE-t8lH-Lsu9-VCxv-DzG3-SEi9-DpziQD', 'scsi-0QEMU_QEMU_HARDDISK_f8b6a063-90c4-466f-950a-7ec8689e5fcc', 'scsi-SQEMU_QEMU_HARDDISK_f8b6a063-90c4-466f-950a-7ec8689e5fcc'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f8b6a063', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--7b577363--2bac--543e--944e--5354861b1af5-osd--block--7b577363--2bac--543e--944e--5354861b1af5']}}, 'ansible_loop_var': 'item'})  2026-02-14 06:42:56.244907 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:42:56.244929 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:42:56.244944 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-04-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:42:56.244958 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:42:56.244976 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-SN89xi-mT6S-OMxw-vqsI-uUyB-OeGR-YcFBXd', 'dm-uuid-CRYPT-LUKS2-366eda1d300c4ff497bf868d045a2886-SN89xi-mT6S-OMxw-vqsI-uUyB-OeGR-YcFBXd'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:43:01.523015 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:43:01.523156 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7b577363--2bac--543e--944e--5354861b1af5-osd--block--7b577363--2bac--543e--944e--5354861b1af5', 'dm-uuid-LVM-0VL0CxXxe2vdWsz49rVaxb3uSV9CWoFcSN89ximT6SOMxwvqsIuUyBOeGRYcFBXd'], 'uuids': ['366eda1d-300c-4ff4-97bf-868d045a2886'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f8b6a063', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['SN89xi-mT6S-OMxw-vqsI-uUyB-OeGR-YcFBXd']}}, 'ansible_loop_var': 'item'})  2026-02-14 06:43:01.523194 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-9XBo1I-CFLx-ADHD-pZVq-BmE6-mdcf-IWW9zX', 'scsi-0QEMU_QEMU_HARDDISK_f960435b-b83d-47c8-ac31-653544f80bd0', 'scsi-SQEMU_QEMU_HARDDISK_f960435b-b83d-47c8-ac31-653544f80bd0'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f960435b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--df737486--1b51--5b4a--92b8--76d7a8957091-osd--block--df737486--1b51--5b4a--92b8--76d7a8957091']}}, 'ansible_loop_var': 'item'})  2026-02-14 06:43:01.523210 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:43:01.523248 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '677d5586', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part16', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part14', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part15', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part1', 'scsi-SQEMU_QEMU_HARDDISK_677d5586-73cd-49dc-a30b-5398ef511889-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:43:01.523296 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:43:01.523317 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:43:01.523342 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-8xK25y-ZZCd-dpKg-Hxc3-NQuN-izer-GpwRdL', 'dm-uuid-CRYPT-LUKS2-cbd2394d69724905b52ec3fabde9215a-8xK25y-ZZCd-dpKg-Hxc3-NQuN-izer-GpwRdL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:43:01.523380 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:43:01.523399 | orchestrator | 2026-02-14 06:43:01.523418 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-14 06:43:01.523437 | orchestrator | Saturday 14 February 2026 06:42:57 +0000 (0:00:01.373) 1:06:09.729 ***** 2026-02-14 06:43:01.523456 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:43:01.523474 | orchestrator | 2026-02-14 06:43:01.523492 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-14 06:43:01.523564 | orchestrator | Saturday 14 February 2026 06:42:58 +0000 (0:00:01.501) 1:06:11.230 ***** 2026-02-14 06:43:01.523586 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:43:01.523605 | orchestrator | 2026-02-14 06:43:01.523622 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-14 06:43:01.523638 | orchestrator | Saturday 14 February 2026 06:43:00 +0000 (0:00:01.141) 1:06:12.372 ***** 2026-02-14 06:43:01.523655 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:43:01.523671 | orchestrator | 2026-02-14 06:43:01.523686 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-14 06:43:01.523719 | orchestrator | Saturday 14 February 2026 06:43:01 +0000 (0:00:01.460) 1:06:13.832 ***** 2026-02-14 06:43:44.133157 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:43:44.133268 | orchestrator | 2026-02-14 06:43:44.133282 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-14 06:43:44.133294 | orchestrator | Saturday 14 February 2026 06:43:02 +0000 (0:00:01.129) 1:06:14.961 ***** 2026-02-14 06:43:44.133304 | orchestrator | skipping: [testbed-node-4] 2026-02-14 
06:43:44.133314 | orchestrator | 2026-02-14 06:43:44.133324 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-14 06:43:44.133334 | orchestrator | Saturday 14 February 2026 06:43:03 +0000 (0:00:01.272) 1:06:16.234 ***** 2026-02-14 06:43:44.133343 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:43:44.133353 | orchestrator | 2026-02-14 06:43:44.133364 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-14 06:43:44.133373 | orchestrator | Saturday 14 February 2026 06:43:05 +0000 (0:00:01.148) 1:06:17.382 ***** 2026-02-14 06:43:44.133384 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-14 06:43:44.133394 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-14 06:43:44.133403 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-14 06:43:44.133413 | orchestrator | 2026-02-14 06:43:44.133423 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-14 06:43:44.133432 | orchestrator | Saturday 14 February 2026 06:43:07 +0000 (0:00:02.044) 1:06:19.426 ***** 2026-02-14 06:43:44.133442 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-14 06:43:44.133452 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-14 06:43:44.133461 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-14 06:43:44.133471 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:43:44.133480 | orchestrator | 2026-02-14 06:43:44.133489 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-14 06:43:44.133499 | orchestrator | Saturday 14 February 2026 06:43:08 +0000 (0:00:01.210) 1:06:20.637 ***** 2026-02-14 06:43:44.133508 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4 2026-02-14 06:43:44.133519 | 
orchestrator | 2026-02-14 06:43:44.133588 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-14 06:43:44.133601 | orchestrator | Saturday 14 February 2026 06:43:09 +0000 (0:00:01.103) 1:06:21.741 ***** 2026-02-14 06:43:44.133611 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:43:44.133620 | orchestrator | 2026-02-14 06:43:44.133630 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-14 06:43:44.133663 | orchestrator | Saturday 14 February 2026 06:43:10 +0000 (0:00:01.195) 1:06:22.936 ***** 2026-02-14 06:43:44.133673 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:43:44.133682 | orchestrator | 2026-02-14 06:43:44.133692 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-14 06:43:44.133703 | orchestrator | Saturday 14 February 2026 06:43:11 +0000 (0:00:01.235) 1:06:24.172 ***** 2026-02-14 06:43:44.133714 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:43:44.133724 | orchestrator | 2026-02-14 06:43:44.133735 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-14 06:43:44.133746 | orchestrator | Saturday 14 February 2026 06:43:13 +0000 (0:00:01.167) 1:06:25.340 ***** 2026-02-14 06:43:44.133757 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:43:44.133769 | orchestrator | 2026-02-14 06:43:44.133780 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-14 06:43:44.133791 | orchestrator | Saturday 14 February 2026 06:43:14 +0000 (0:00:01.345) 1:06:26.686 ***** 2026-02-14 06:43:44.133802 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-14 06:43:44.133813 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-14 06:43:44.133824 | orchestrator | skipping: [testbed-node-4] 
=> (item=testbed-node-5)  2026-02-14 06:43:44.133835 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:43:44.133846 | orchestrator | 2026-02-14 06:43:44.133856 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-14 06:43:44.133868 | orchestrator | Saturday 14 February 2026 06:43:15 +0000 (0:00:01.453) 1:06:28.139 ***** 2026-02-14 06:43:44.133879 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-14 06:43:44.133890 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-14 06:43:44.133901 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-14 06:43:44.133911 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:43:44.133922 | orchestrator | 2026-02-14 06:43:44.133933 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-14 06:43:44.133944 | orchestrator | Saturday 14 February 2026 06:43:17 +0000 (0:00:01.425) 1:06:29.565 ***** 2026-02-14 06:43:44.133955 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-14 06:43:44.133966 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-14 06:43:44.133977 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-14 06:43:44.133988 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:43:44.133999 | orchestrator | 2026-02-14 06:43:44.134010 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-14 06:43:44.134076 | orchestrator | Saturday 14 February 2026 06:43:18 +0000 (0:00:01.397) 1:06:30.963 ***** 2026-02-14 06:43:44.134088 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:43:44.134099 | orchestrator | 2026-02-14 06:43:44.134108 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-14 06:43:44.134117 | orchestrator | Saturday 14 February 2026 06:43:19 +0000 
(0:00:01.182) 1:06:32.146 ***** 2026-02-14 06:43:44.134127 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-14 06:43:44.134136 | orchestrator | 2026-02-14 06:43:44.134146 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-14 06:43:44.134155 | orchestrator | Saturday 14 February 2026 06:43:21 +0000 (0:00:01.348) 1:06:33.494 ***** 2026-02-14 06:43:44.134182 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 06:43:44.134192 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:43:44.134202 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 06:43:44.134212 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-14 06:43:44.134221 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-02-14 06:43:44.134231 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-14 06:43:44.134249 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-14 06:43:44.134259 | orchestrator | 2026-02-14 06:43:44.134268 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-14 06:43:44.134278 | orchestrator | Saturday 14 February 2026 06:43:23 +0000 (0:00:02.201) 1:06:35.696 ***** 2026-02-14 06:43:44.134287 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 06:43:44.134297 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:43:44.134306 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 06:43:44.134316 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-02-14 06:43:44.134325 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-02-14 06:43:44.134335 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-14 06:43:44.134344 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-14 06:43:44.134354 | orchestrator | 2026-02-14 06:43:44.134364 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] **************************** 2026-02-14 06:43:44.134373 | orchestrator | Saturday 14 February 2026 06:43:25 +0000 (0:00:02.325) 1:06:38.022 ***** 2026-02-14 06:43:44.134383 | orchestrator | changed: [testbed-node-4] 2026-02-14 06:43:44.134393 | orchestrator | 2026-02-14 06:43:44.134408 | orchestrator | TASK [Stop ceph rgw (pt. 1)] *************************************************** 2026-02-14 06:43:44.134418 | orchestrator | Saturday 14 February 2026 06:43:27 +0000 (0:00:01.937) 1:06:39.959 ***** 2026-02-14 06:43:44.134427 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-14 06:43:44.134437 | orchestrator | 2026-02-14 06:43:44.134447 | orchestrator | TASK [Stop ceph rgw (pt. 
2)] *************************************************** 2026-02-14 06:43:44.134456 | orchestrator | Saturday 14 February 2026 06:43:30 +0000 (0:00:02.448) 1:06:42.407 ***** 2026-02-14 06:43:44.134466 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-14 06:43:44.134475 | orchestrator | 2026-02-14 06:43:44.134485 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-14 06:43:44.134494 | orchestrator | Saturday 14 February 2026 06:43:32 +0000 (0:00:01.991) 1:06:44.399 ***** 2026-02-14 06:43:44.134504 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4 2026-02-14 06:43:44.134513 | orchestrator | 2026-02-14 06:43:44.134522 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-14 06:43:44.134532 | orchestrator | Saturday 14 February 2026 06:43:33 +0000 (0:00:01.133) 1:06:45.532 ***** 2026-02-14 06:43:44.134557 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4 2026-02-14 06:43:44.134567 | orchestrator | 2026-02-14 06:43:44.134577 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-14 06:43:44.134586 | orchestrator | Saturday 14 February 2026 06:43:34 +0000 (0:00:01.105) 1:06:46.637 ***** 2026-02-14 06:43:44.134596 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:43:44.134605 | orchestrator | 2026-02-14 06:43:44.134614 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-14 06:43:44.134624 | orchestrator | Saturday 14 February 2026 06:43:35 +0000 (0:00:01.214) 1:06:47.851 ***** 2026-02-14 06:43:44.134633 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:43:44.134643 | orchestrator | 2026-02-14 06:43:44.134653 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2026-02-14 06:43:44.134662 | orchestrator | Saturday 14 February 2026 06:43:37 +0000 (0:00:01.981) 1:06:49.833 ***** 2026-02-14 06:43:44.134672 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:43:44.134690 | orchestrator | 2026-02-14 06:43:44.134699 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-14 06:43:44.134709 | orchestrator | Saturday 14 February 2026 06:43:39 +0000 (0:00:01.545) 1:06:51.379 ***** 2026-02-14 06:43:44.134718 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:43:44.134728 | orchestrator | 2026-02-14 06:43:44.134737 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-14 06:43:44.134747 | orchestrator | Saturday 14 February 2026 06:43:40 +0000 (0:00:01.570) 1:06:52.949 ***** 2026-02-14 06:43:44.134757 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:43:44.134766 | orchestrator | 2026-02-14 06:43:44.134775 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-14 06:43:44.134785 | orchestrator | Saturday 14 February 2026 06:43:41 +0000 (0:00:01.169) 1:06:54.118 ***** 2026-02-14 06:43:44.134794 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:43:44.134804 | orchestrator | 2026-02-14 06:43:44.134813 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-14 06:43:44.134823 | orchestrator | Saturday 14 February 2026 06:43:42 +0000 (0:00:01.117) 1:06:55.236 ***** 2026-02-14 06:43:44.134833 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:43:44.134842 | orchestrator | 2026-02-14 06:43:44.134851 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-14 06:43:44.134868 | orchestrator | Saturday 14 February 2026 06:43:44 +0000 (0:00:01.203) 1:06:56.440 ***** 2026-02-14 06:44:24.738245 | 
orchestrator | ok: [testbed-node-4] 2026-02-14 06:44:24.738364 | orchestrator | 2026-02-14 06:44:24.738382 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-14 06:44:24.738395 | orchestrator | Saturday 14 February 2026 06:43:45 +0000 (0:00:01.681) 1:06:58.121 ***** 2026-02-14 06:44:24.738406 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:44:24.738417 | orchestrator | 2026-02-14 06:44:24.738429 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-14 06:44:24.738440 | orchestrator | Saturday 14 February 2026 06:43:47 +0000 (0:00:01.619) 1:06:59.741 ***** 2026-02-14 06:44:24.738451 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:44:24.738462 | orchestrator | 2026-02-14 06:44:24.738473 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-14 06:44:24.738484 | orchestrator | Saturday 14 February 2026 06:43:48 +0000 (0:00:00.835) 1:07:00.576 ***** 2026-02-14 06:44:24.738495 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:44:24.738506 | orchestrator | 2026-02-14 06:44:24.738517 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-14 06:44:24.738528 | orchestrator | Saturday 14 February 2026 06:43:49 +0000 (0:00:00.762) 1:07:01.338 ***** 2026-02-14 06:44:24.738539 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:44:24.738549 | orchestrator | 2026-02-14 06:44:24.738561 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-14 06:44:24.738625 | orchestrator | Saturday 14 February 2026 06:43:49 +0000 (0:00:00.785) 1:07:02.124 ***** 2026-02-14 06:44:24.738638 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:44:24.738649 | orchestrator | 2026-02-14 06:44:24.738660 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-14 06:44:24.738671 
| orchestrator | Saturday 14 February 2026 06:43:50 +0000 (0:00:00.797) 1:07:02.921 ***** 2026-02-14 06:44:24.738682 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:44:24.738693 | orchestrator | 2026-02-14 06:44:24.738704 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-14 06:44:24.738715 | orchestrator | Saturday 14 February 2026 06:43:51 +0000 (0:00:00.787) 1:07:03.709 ***** 2026-02-14 06:44:24.738726 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:44:24.738737 | orchestrator | 2026-02-14 06:44:24.738764 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-14 06:44:24.738777 | orchestrator | Saturday 14 February 2026 06:43:52 +0000 (0:00:00.789) 1:07:04.498 ***** 2026-02-14 06:44:24.738790 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:44:24.738827 | orchestrator | 2026-02-14 06:44:24.738840 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-14 06:44:24.738853 | orchestrator | Saturday 14 February 2026 06:43:52 +0000 (0:00:00.828) 1:07:05.327 ***** 2026-02-14 06:44:24.738865 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:44:24.738878 | orchestrator | 2026-02-14 06:44:24.738890 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-14 06:44:24.738903 | orchestrator | Saturday 14 February 2026 06:43:53 +0000 (0:00:00.858) 1:07:06.185 ***** 2026-02-14 06:44:24.738915 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:44:24.738926 | orchestrator | 2026-02-14 06:44:24.738937 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-14 06:44:24.738947 | orchestrator | Saturday 14 February 2026 06:43:54 +0000 (0:00:00.864) 1:07:07.050 ***** 2026-02-14 06:44:24.738958 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:44:24.738969 | orchestrator | 2026-02-14 06:44:24.738980 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-14 06:44:24.738991 | orchestrator | Saturday 14 February 2026 06:43:55 +0000 (0:00:00.847) 1:07:07.897 ***** 2026-02-14 06:44:24.739001 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:44:24.739012 | orchestrator | 2026-02-14 06:44:24.739023 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-14 06:44:24.739034 | orchestrator | Saturday 14 February 2026 06:43:56 +0000 (0:00:00.771) 1:07:08.669 ***** 2026-02-14 06:44:24.739045 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:44:24.739056 | orchestrator | 2026-02-14 06:44:24.739067 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-14 06:44:24.739077 | orchestrator | Saturday 14 February 2026 06:43:57 +0000 (0:00:00.787) 1:07:09.457 ***** 2026-02-14 06:44:24.739088 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:44:24.739099 | orchestrator | 2026-02-14 06:44:24.739110 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-14 06:44:24.739121 | orchestrator | Saturday 14 February 2026 06:43:58 +0000 (0:00:00.894) 1:07:10.351 ***** 2026-02-14 06:44:24.739131 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:44:24.739142 | orchestrator | 2026-02-14 06:44:24.739153 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-14 06:44:24.739164 | orchestrator | Saturday 14 February 2026 06:43:58 +0000 (0:00:00.763) 1:07:11.115 ***** 2026-02-14 06:44:24.739175 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:44:24.739186 | orchestrator | 2026-02-14 06:44:24.739197 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-14 06:44:24.739208 | orchestrator | Saturday 14 February 2026 06:43:59 +0000 (0:00:00.777) 1:07:11.892 ***** 
2026-02-14 06:44:24.739219 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:44:24.739230 | orchestrator |
2026-02-14 06:44:24.739241 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-14 06:44:24.739252 | orchestrator | Saturday 14 February 2026 06:44:00 +0000 (0:00:00.805) 1:07:12.698 *****
2026-02-14 06:44:24.739263 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:44:24.739273 | orchestrator |
2026-02-14 06:44:24.739284 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-14 06:44:24.739296 | orchestrator | Saturday 14 February 2026 06:44:01 +0000 (0:00:00.764) 1:07:13.463 *****
2026-02-14 06:44:24.739307 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:44:24.739317 | orchestrator |
2026-02-14 06:44:24.739328 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-14 06:44:24.739339 | orchestrator | Saturday 14 February 2026 06:44:01 +0000 (0:00:00.771) 1:07:14.234 *****
2026-02-14 06:44:24.739350 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:44:24.739361 | orchestrator |
2026-02-14 06:44:24.739391 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-14 06:44:24.739403 | orchestrator | Saturday 14 February 2026 06:44:02 +0000 (0:00:00.827) 1:07:15.061 *****
2026-02-14 06:44:24.739414 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:44:24.739433 | orchestrator |
2026-02-14 06:44:24.739444 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-14 06:44:24.739455 | orchestrator | Saturday 14 February 2026 06:44:03 +0000 (0:00:00.792) 1:07:15.854 *****
2026-02-14 06:44:24.739465 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:44:24.739476 | orchestrator |
2026-02-14 06:44:24.739487 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-14 06:44:24.739498 | orchestrator | Saturday 14 February 2026 06:44:04 +0000 (0:00:00.791) 1:07:16.646 *****
2026-02-14 06:44:24.739508 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:44:24.739519 | orchestrator |
2026-02-14 06:44:24.739530 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-14 06:44:24.739540 | orchestrator | Saturday 14 February 2026 06:44:05 +0000 (0:00:00.789) 1:07:17.435 *****
2026-02-14 06:44:24.739551 | orchestrator | ok: [testbed-node-4]
2026-02-14 06:44:24.739562 | orchestrator |
2026-02-14 06:44:24.739589 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-14 06:44:24.739600 | orchestrator | Saturday 14 February 2026 06:44:06 +0000 (0:00:01.531) 1:07:18.966 *****
2026-02-14 06:44:24.739611 | orchestrator | ok: [testbed-node-4]
2026-02-14 06:44:24.739622 | orchestrator |
2026-02-14 06:44:24.739633 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-14 06:44:24.739644 | orchestrator | Saturday 14 February 2026 06:44:08 +0000 (0:00:01.298) 1:07:20.879 *****
2026-02-14 06:44:24.739654 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4
2026-02-14 06:44:24.739666 | orchestrator |
2026-02-14 06:44:24.739678 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-14 06:44:24.739688 | orchestrator | Saturday 14 February 2026 06:44:09 +0000 (0:00:01.140) 1:07:22.177 *****
2026-02-14 06:44:24.739699 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:44:24.739710 | orchestrator |
2026-02-14 06:44:24.739726 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-14 06:44:24.739737 | orchestrator | Saturday 14 February 2026 06:44:10 +0000 (0:00:01.140) 1:07:23.318 *****
2026-02-14 06:44:24.739748 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:44:24.739758 | orchestrator |
2026-02-14 06:44:24.739769 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-14 06:44:24.739780 | orchestrator | Saturday 14 February 2026 06:44:12 +0000 (0:00:01.156) 1:07:24.474 *****
2026-02-14 06:44:24.739790 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-14 06:44:24.739801 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-14 06:44:24.739812 | orchestrator |
2026-02-14 06:44:24.739822 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-14 06:44:24.739833 | orchestrator | Saturday 14 February 2026 06:44:13 +0000 (0:00:01.841) 1:07:26.316 *****
2026-02-14 06:44:24.739844 | orchestrator | ok: [testbed-node-4]
2026-02-14 06:44:24.739855 | orchestrator |
2026-02-14 06:44:24.739866 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-14 06:44:24.739876 | orchestrator | Saturday 14 February 2026 06:44:15 +0000 (0:00:01.558) 1:07:27.875 *****
2026-02-14 06:44:24.739887 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:44:24.739898 | orchestrator |
2026-02-14 06:44:24.739908 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-14 06:44:24.739919 | orchestrator | Saturday 14 February 2026 06:44:16 +0000 (0:00:01.144) 1:07:29.019 *****
2026-02-14 06:44:24.739930 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:44:24.739940 | orchestrator |
2026-02-14 06:44:24.739951 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-14 06:44:24.739962 | orchestrator | Saturday 14 February 2026 06:44:17 +0000 (0:00:00.808) 1:07:29.828 *****
2026-02-14 06:44:24.739972 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:44:24.739983 | orchestrator |
2026-02-14 06:44:24.739994 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-14 06:44:24.740012 | orchestrator | Saturday 14 February 2026 06:44:18 +0000 (0:00:00.798) 1:07:30.627 *****
2026-02-14 06:44:24.740024 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4
2026-02-14 06:44:24.740034 | orchestrator |
2026-02-14 06:44:24.740045 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-14 06:44:24.740056 | orchestrator | Saturday 14 February 2026 06:44:19 +0000 (0:00:01.192) 1:07:31.820 *****
2026-02-14 06:44:24.740066 | orchestrator | ok: [testbed-node-4]
2026-02-14 06:44:24.740077 | orchestrator |
2026-02-14 06:44:24.740088 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-14 06:44:24.740098 | orchestrator | Saturday 14 February 2026 06:44:21 +0000 (0:00:01.747) 1:07:33.567 *****
2026-02-14 06:44:24.740109 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-14 06:44:24.740120 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-14 06:44:24.740130 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-14 06:44:24.740141 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:44:24.740151 | orchestrator |
2026-02-14 06:44:24.740162 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-14 06:44:24.740173 | orchestrator | Saturday 14 February 2026 06:44:22 +0000 (0:00:01.158) 1:07:34.725 *****
2026-02-14 06:44:24.740184 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:44:24.740194 | orchestrator |
2026-02-14 06:44:24.740205 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-14 06:44:24.740216 | orchestrator | Saturday 14 February 2026 06:44:23 +0000 (0:00:01.144) 1:07:35.870 *****
2026-02-14 06:44:24.740226 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:44:24.740237 | orchestrator |
2026-02-14 06:44:24.740254 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-14 06:45:07.456861 | orchestrator | Saturday 14 February 2026 06:44:24 +0000 (0:00:01.181) 1:07:37.052 *****
2026-02-14 06:45:07.456945 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:45:07.456953 | orchestrator |
2026-02-14 06:45:07.456960 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-14 06:45:07.456966 | orchestrator | Saturday 14 February 2026 06:44:25 +0000 (0:00:01.151) 1:07:38.204 *****
2026-02-14 06:45:07.456971 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:45:07.456976 | orchestrator |
2026-02-14 06:45:07.456982 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-14 06:45:07.456987 | orchestrator | Saturday 14 February 2026 06:44:27 +0000 (0:00:01.152) 1:07:39.356 *****
2026-02-14 06:45:07.457008 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:45:07.457014 | orchestrator |
2026-02-14 06:45:07.457019 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-14 06:45:07.457024 | orchestrator | Saturday 14 February 2026 06:44:27 +0000 (0:00:00.818) 1:07:40.175 *****
2026-02-14 06:45:07.457030 | orchestrator | ok: [testbed-node-4]
2026-02-14 06:45:07.457036 | orchestrator |
2026-02-14 06:45:07.457041 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-14 06:45:07.457047 | orchestrator | Saturday 14 February 2026 06:44:30 +0000 (0:00:02.172) 1:07:42.348 *****
2026-02-14 06:45:07.457052 | orchestrator | ok: [testbed-node-4]
2026-02-14 06:45:07.457057 | orchestrator |
2026-02-14 06:45:07.457063 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-14 06:45:07.457068 | orchestrator | Saturday 14 February 2026 06:44:30 +0000 (0:00:00.765) 1:07:43.113 *****
2026-02-14 06:45:07.457073 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4
2026-02-14 06:45:07.457078 | orchestrator |
2026-02-14 06:45:07.457084 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-14 06:45:07.457089 | orchestrator | Saturday 14 February 2026 06:44:31 +0000 (0:00:01.114) 1:07:44.228 *****
2026-02-14 06:45:07.457113 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:45:07.457118 | orchestrator |
2026-02-14 06:45:07.457135 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-14 06:45:07.457140 | orchestrator | Saturday 14 February 2026 06:44:33 +0000 (0:00:01.215) 1:07:45.443 *****
2026-02-14 06:45:07.457145 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:45:07.457190 | orchestrator |
2026-02-14 06:45:07.457196 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-14 06:45:07.457201 | orchestrator | Saturday 14 February 2026 06:44:34 +0000 (0:00:01.138) 1:07:46.581 *****
2026-02-14 06:45:07.457206 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:45:07.457211 | orchestrator |
2026-02-14 06:45:07.457216 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-14 06:45:07.457221 | orchestrator | Saturday 14 February 2026 06:44:35 +0000 (0:00:01.154) 1:07:47.736 *****
2026-02-14 06:45:07.457226 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:45:07.457231 | orchestrator |
2026-02-14 06:45:07.457236 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-14 06:45:07.457241 | orchestrator | Saturday 14 February 2026 06:44:36 +0000 (0:00:01.115) 1:07:48.851 *****
2026-02-14 06:45:07.457246 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:45:07.457251 | orchestrator |
2026-02-14 06:45:07.457256 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-14 06:45:07.457261 | orchestrator | Saturday 14 February 2026 06:44:37 +0000 (0:00:01.160) 1:07:50.012 *****
2026-02-14 06:45:07.457267 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:45:07.457272 | orchestrator |
2026-02-14 06:45:07.457277 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-14 06:45:07.457282 | orchestrator | Saturday 14 February 2026 06:44:38 +0000 (0:00:01.264) 1:07:51.277 *****
2026-02-14 06:45:07.457287 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:45:07.457292 | orchestrator |
2026-02-14 06:45:07.457297 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-14 06:45:07.457302 | orchestrator | Saturday 14 February 2026 06:44:40 +0000 (0:00:01.186) 1:07:52.464 *****
2026-02-14 06:45:07.457307 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:45:07.457312 | orchestrator |
2026-02-14 06:45:07.457317 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-14 06:45:07.457322 | orchestrator | Saturday 14 February 2026 06:44:41 +0000 (0:00:01.236) 1:07:53.701 *****
2026-02-14 06:45:07.457327 | orchestrator | ok: [testbed-node-4]
2026-02-14 06:45:07.457332 | orchestrator |
2026-02-14 06:45:07.457337 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-14 06:45:07.457342 | orchestrator | Saturday 14 February 2026 06:44:42 +0000 (0:00:00.798) 1:07:54.499 *****
2026-02-14 06:45:07.457348 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4
2026-02-14 06:45:07.457354 | orchestrator |
2026-02-14 06:45:07.457359 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-14 06:45:07.457364 | orchestrator | Saturday 14 February 2026 06:44:43 +0000 (0:00:01.118) 1:07:55.618 *****
2026-02-14 06:45:07.457369 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph)
2026-02-14 06:45:07.457375 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/)
2026-02-14 06:45:07.457380 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-02-14 06:45:07.457385 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-02-14 06:45:07.457390 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-02-14 06:45:07.457395 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-02-14 06:45:07.457400 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-02-14 06:45:07.457405 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-02-14 06:45:07.457411 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-14 06:45:07.457416 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-14 06:45:07.457426 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-14 06:45:07.457444 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-14 06:45:07.457449 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-14 06:45:07.457455 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-14 06:45:07.457460 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph)
2026-02-14 06:45:07.457465 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph)
2026-02-14 06:45:07.457470 | orchestrator |
2026-02-14 06:45:07.457475 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-14 06:45:07.457480 | orchestrator | Saturday 14 February 2026 06:44:49 +0000 (0:00:06.137) 1:08:01.755 *****
2026-02-14 06:45:07.457485 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4
2026-02-14 06:45:07.457490 | orchestrator |
2026-02-14 06:45:07.457495 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-14 06:45:07.457500 | orchestrator | Saturday 14 February 2026 06:44:50 +0000 (0:00:01.180) 1:08:02.936 *****
2026-02-14 06:45:07.457505 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-14 06:45:07.457512 | orchestrator |
2026-02-14 06:45:07.457517 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-14 06:45:07.457522 | orchestrator | Saturday 14 February 2026 06:44:52 +0000 (0:00:01.569) 1:08:04.505 *****
2026-02-14 06:45:07.457527 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-14 06:45:07.457532 | orchestrator |
2026-02-14 06:45:07.457537 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-14 06:45:07.457542 | orchestrator | Saturday 14 February 2026 06:44:53 +0000 (0:00:01.651) 1:08:06.157 *****
2026-02-14 06:45:07.457547 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:45:07.457552 | orchestrator |
2026-02-14 06:45:07.457560 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-14 06:45:07.457565 | orchestrator | Saturday 14 February 2026 06:44:54 +0000 (0:00:00.815) 1:08:06.973 *****
2026-02-14 06:45:07.457570 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:45:07.457576 | orchestrator |
2026-02-14 06:45:07.457581 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-14 06:45:07.457586 | orchestrator | Saturday 14 February 2026 06:44:55 +0000 (0:00:00.847) 1:08:07.820 *****
2026-02-14 06:45:07.457591 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:45:07.457596 | orchestrator |
2026-02-14 06:45:07.457601 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-14 06:45:07.457618 | orchestrator | Saturday 14 February 2026 06:44:56 +0000 (0:00:00.777) 1:08:08.598 *****
2026-02-14 06:45:07.457624 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:45:07.457629 | orchestrator |
2026-02-14 06:45:07.457634 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-14 06:45:07.457639 | orchestrator | Saturday 14 February 2026 06:44:57 +0000 (0:00:00.769) 1:08:09.367 *****
2026-02-14 06:45:07.457644 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:45:07.457649 | orchestrator |
2026-02-14 06:45:07.457654 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-14 06:45:07.457659 | orchestrator | Saturday 14 February 2026 06:44:57 +0000 (0:00:00.789) 1:08:10.156 *****
2026-02-14 06:45:07.457665 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:45:07.457670 | orchestrator |
2026-02-14 06:45:07.457675 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-14 06:45:07.457680 | orchestrator | Saturday 14 February 2026 06:44:58 +0000 (0:00:00.881) 1:08:11.038 *****
2026-02-14 06:45:07.457685 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:45:07.457694 | orchestrator |
2026-02-14 06:45:07.457699 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-14 06:45:07.457704 | orchestrator | Saturday 14 February 2026 06:44:59 +0000 (0:00:00.792) 1:08:11.831 *****
2026-02-14 06:45:07.457709 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:45:07.457714 | orchestrator |
2026-02-14 06:45:07.457719 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-14 06:45:07.457724 | orchestrator | Saturday 14 February 2026 06:45:00 +0000 (0:00:00.796) 1:08:12.628 *****
2026-02-14 06:45:07.457729 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:45:07.457734 | orchestrator |
2026-02-14 06:45:07.457740 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-14 06:45:07.457745 | orchestrator | Saturday 14 February 2026 06:45:01 +0000 (0:00:00.784) 1:08:13.413 *****
2026-02-14 06:45:07.457750 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:45:07.457755 | orchestrator |
2026-02-14 06:45:07.457760 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-14 06:45:07.457765 | orchestrator | Saturday 14 February 2026 06:45:01 +0000 (0:00:00.791) 1:08:14.205 *****
2026-02-14 06:45:07.457770 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:45:07.457775 | orchestrator |
2026-02-14 06:45:07.457780 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-14 06:45:07.457785 | orchestrator | Saturday 14 February 2026 06:45:02 +0000 (0:00:00.798) 1:08:15.004 *****
2026-02-14 06:45:07.457790 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)]
2026-02-14 06:45:07.457795 | orchestrator |
2026-02-14 06:45:07.457800 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-14 06:45:07.457805 | orchestrator | Saturday 14 February 2026 06:45:06 +0000 (0:00:03.930) 1:08:18.934 *****
2026-02-14 06:45:07.457810 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-14 06:45:07.457815 | orchestrator |
2026-02-14 06:45:07.457824 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-14 06:45:48.736203 | orchestrator | Saturday 14 February 2026 06:45:07 +0000 (0:00:00.835) 1:08:19.770 *****
2026-02-14 06:45:48.736349 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-02-14 06:45:48.736373 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-02-14 06:45:48.736388 | orchestrator |
2026-02-14 06:45:48.736407 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-14 06:45:48.736426 | orchestrator | Saturday 14 February 2026 06:45:12 +0000 (0:00:04.869) 1:08:24.640 *****
2026-02-14 06:45:48.736444 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:45:48.736457 | orchestrator |
2026-02-14 06:45:48.736471 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-14 06:45:48.736490 | orchestrator | Saturday 14 February 2026 06:45:13 +0000 (0:00:00.876) 1:08:25.516 *****
2026-02-14 06:45:48.736509 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:45:48.736528 | orchestrator |
2026-02-14 06:45:48.736547 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-14 06:45:48.736568 | orchestrator | Saturday 14 February 2026 06:45:13 +0000 (0:00:00.804) 1:08:26.321 *****
2026-02-14 06:45:48.736588 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:45:48.736632 | orchestrator |
2026-02-14 06:45:48.736701 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-14 06:45:48.736715 | orchestrator | Saturday 14 February 2026 06:45:14 +0000 (0:00:00.855) 1:08:27.176 *****
2026-02-14 06:45:48.736727 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:45:48.736739 | orchestrator |
2026-02-14 06:45:48.736752 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-14 06:45:48.736766 | orchestrator | Saturday 14 February 2026 06:45:15 +0000 (0:00:00.831) 1:08:28.008 *****
2026-02-14 06:45:48.736778 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:45:48.736790 | orchestrator |
2026-02-14 06:45:48.736803 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-14 06:45:48.736815 | orchestrator | Saturday 14 February 2026 06:45:16 +0000 (0:00:00.826) 1:08:28.834 *****
2026-02-14 06:45:48.736829 | orchestrator | ok: [testbed-node-4]
2026-02-14 06:45:48.736851 | orchestrator |
2026-02-14 06:45:48.736872 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-14 06:45:48.736886 | orchestrator | Saturday 14 February 2026 06:45:17 +0000 (0:00:00.892) 1:08:29.727 *****
2026-02-14 06:45:48.736955 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-14 06:45:48.736982 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-14 06:45:48.737003 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-14 06:45:48.737021 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:45:48.737039 | orchestrator |
2026-02-14 06:45:48.737059 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-14 06:45:48.737078 | orchestrator | Saturday 14 February 2026 06:45:18 +0000 (0:00:01.144) 1:08:30.871 *****
2026-02-14 06:45:48.737097 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-14 06:45:48.737118 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-14 06:45:48.737188 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-14 06:45:48.737210 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:45:48.737230 | orchestrator |
2026-02-14 06:45:48.737249 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-14 06:45:48.737270 | orchestrator | Saturday 14 February 2026 06:45:19 +0000 (0:00:01.096) 1:08:31.968 *****
2026-02-14 06:45:48.737287 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-14 06:45:48.737305 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-14 06:45:48.737324 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-14 06:45:48.737343 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:45:48.737362 | orchestrator |
2026-02-14 06:45:48.737381 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-14 06:45:48.737401 | orchestrator | Saturday 14 February 2026 06:45:20 +0000 (0:00:01.062) 1:08:33.030 *****
2026-02-14 06:45:48.737419 | orchestrator | ok: [testbed-node-4]
2026-02-14 06:45:48.737439 | orchestrator |
2026-02-14 06:45:48.737458 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-14 06:45:48.737477 | orchestrator | Saturday 14 February 2026 06:45:21 +0000 (0:00:00.840) 1:08:33.871 *****
2026-02-14 06:45:48.737496 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-14 06:45:48.737516 | orchestrator |
2026-02-14 06:45:48.737535 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-14 06:45:48.737555 | orchestrator | Saturday 14 February 2026 06:45:22 +0000 (0:00:01.021) 1:08:34.893 *****
2026-02-14 06:45:48.737567 | orchestrator | ok: [testbed-node-4]
2026-02-14 06:45:48.737578 | orchestrator |
2026-02-14 06:45:48.737588 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-02-14 06:45:48.737599 | orchestrator | Saturday 14 February 2026 06:45:24 +0000 (0:00:01.543) 1:08:36.436 *****
2026-02-14 06:45:48.737609 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-4
2026-02-14 06:45:48.737627 | orchestrator |
2026-02-14 06:45:48.737696 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-02-14 06:45:48.737734 | orchestrator | Saturday 14 February 2026 06:45:25 +0000 (0:00:01.136) 1:08:37.572 *****
2026-02-14 06:45:48.737754 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-14 06:45:48.737773 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-02-14 06:45:48.737792 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-14 06:45:48.737813 | orchestrator |
2026-02-14 06:45:48.737824 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-02-14 06:45:48.737834 | orchestrator | Saturday 14 February 2026 06:45:28 +0000 (0:00:03.270) 1:08:40.843 *****
2026-02-14 06:45:48.737845 | orchestrator | ok: [testbed-node-4] => (item=None)
2026-02-14 06:45:48.737856 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-02-14 06:45:48.737866 | orchestrator | ok: [testbed-node-4]
2026-02-14 06:45:48.737880 | orchestrator |
2026-02-14 06:45:48.737898 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-02-14 06:45:48.737918 | orchestrator | Saturday 14 February 2026 06:45:30 +0000 (0:00:01.978) 1:08:42.821 *****
2026-02-14 06:45:48.737931 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:45:48.737942 | orchestrator |
2026-02-14 06:45:48.737952 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-02-14 06:45:48.737963 | orchestrator | Saturday 14 February 2026 06:45:31 +0000 (0:00:00.759) 1:08:43.581 *****
2026-02-14 06:45:48.737975 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-4
2026-02-14 06:45:48.737995 | orchestrator |
2026-02-14 06:45:48.738013 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-02-14 06:45:48.738101 | orchestrator | Saturday 14 February 2026 06:45:32 +0000 (0:00:01.152) 1:08:44.733 *****
2026-02-14 06:45:48.738130 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-14 06:45:48.738151 | orchestrator |
2026-02-14 06:45:48.738168 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-02-14 06:45:48.738215 | orchestrator | Saturday 14 February 2026 06:45:34 +0000 (0:00:01.608) 1:08:46.341 *****
2026-02-14 06:45:48.738226 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-14 06:45:48.738237 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-02-14 06:45:48.738248 | orchestrator |
2026-02-14 06:45:48.738262 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-02-14 06:45:48.738281 | orchestrator | Saturday 14 February 2026 06:45:39 +0000 (0:00:05.241) 1:08:51.583 *****
2026-02-14 06:45:48.738292 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-14 06:45:48.738302 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-14 06:45:48.738313 | orchestrator |
2026-02-14 06:45:48.738324 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-02-14 06:45:48.738335 | orchestrator | Saturday 14 February 2026 06:45:42 +0000 (0:00:03.250) 1:08:54.834 *****
2026-02-14 06:45:48.738345 | orchestrator | ok: [testbed-node-4] => (item=None)
2026-02-14 06:45:48.738356 | orchestrator | ok: [testbed-node-4]
2026-02-14 06:45:48.738367 | orchestrator |
2026-02-14 06:45:48.738378 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2026-02-14 06:45:48.738388 | orchestrator | Saturday 14 February 2026 06:45:44 +0000 (0:00:01.609) 1:08:56.443 *****
2026-02-14 06:45:48.738399 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-4
2026-02-14 06:45:48.738410 | orchestrator |
2026-02-14 06:45:48.738420 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2026-02-14 06:45:48.738431 | orchestrator | Saturday 14 February 2026 06:45:45 +0000 (0:00:01.326) 1:08:57.770 *****
2026-02-14 06:45:48.738442 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 06:45:48.738464 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 06:45:48.738475 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 06:45:48.738486 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 06:45:48.738497 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 06:45:48.738508 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:45:48.738518 | orchestrator |
2026-02-14 06:45:48.738529 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-02-14 06:45:48.738540 | orchestrator | Saturday 14 February 2026 06:45:47 +0000 (0:00:01.620) 1:08:59.390 *****
2026-02-14 06:45:48.738550 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 06:45:48.738561 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 06:45:48.738572 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 06:45:48.738596 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 06:46:54.347402 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 06:46:54.347549 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:46:54.347580 | orchestrator |
2026-02-14 06:46:54.347600 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-02-14 06:46:54.347621 | orchestrator | Saturday 14 February 2026 06:45:48 +0000 (0:00:01.655) 1:09:01.046 *****
2026-02-14 06:46:54.347640 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 06:46:54.347661 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 06:46:54.347679 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 06:46:54.347764 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 06:46:54.347778 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 06:46:54.347789 | orchestrator |
2026-02-14 06:46:54.347801 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-02-14 06:46:54.347829 | orchestrator | Saturday 14 February 2026 06:46:19 +0000 (0:00:30.338) 1:09:31.385 *****
2026-02-14 06:46:54.347840 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:46:54.347851 | orchestrator |
2026-02-14 06:46:54.347862 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-02-14 06:46:54.347873 | orchestrator | Saturday 14 February 2026 06:46:19 +0000 (0:00:00.774) 1:09:32.159 *****
2026-02-14 06:46:54.347884 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:46:54.347895 | orchestrator |
2026-02-14 06:46:54.347908 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-02-14 06:46:54.347921 | orchestrator | Saturday 14 February 2026 06:46:20 +0000 (0:00:00.779) 1:09:32.938 *****
2026-02-14 06:46:54.347965 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-4
2026-02-14 06:46:54.347985 | orchestrator |
2026-02-14 06:46:54.348000 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-02-14 06:46:54.348013 | orchestrator | Saturday 14 February 2026 06:46:21 +0000 (0:00:01.138) 1:09:34.077 *****
2026-02-14 06:46:54.348026 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-4
2026-02-14 06:46:54.348038 | orchestrator |
2026-02-14 06:46:54.348051 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-02-14 06:46:54.348064 | orchestrator | Saturday 14 February 2026 06:46:22 +0000 (0:00:02.067) 1:09:35.201 *****
2026-02-14 06:46:54.348076 | orchestrator | ok: [testbed-node-4]
2026-02-14 06:46:54.348090 | orchestrator |
2026-02-14 06:46:54.348104 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-02-14 06:46:54.348116 | orchestrator | Saturday 14 February 2026 06:46:24 +0000 (0:00:01.928) 1:09:37.269 *****
2026-02-14 06:46:54.348129 | orchestrator | ok: [testbed-node-4]
2026-02-14 06:46:54.348141 | orchestrator |
2026-02-14 06:46:54.348154 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-02-14 06:46:54.348166 | orchestrator | Saturday 14 February 2026 06:46:26 +0000 (0:00:02.262) 1:09:39.198 *****
2026-02-14 06:46:54.348178 | orchestrator | ok: [testbed-node-4]
2026-02-14 06:46:54.348191 | orchestrator |
2026-02-14 06:46:54.348203 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-02-14 06:46:54.348216 | orchestrator | Saturday 14 February 2026 06:46:29 +0000 (0:00:02.262) 1:09:41.460 *****
2026-02-14 06:46:54.348229 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-14 06:46:54.348241 | orchestrator |
2026-02-14 06:46:54.348253 | orchestrator | PLAY [Upgrade ceph rgws cluster] ***********************************************
2026-02-14 06:46:54.348266 | orchestrator |
2026-02-14 06:46:54.348277 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-14 06:46:54.348287 | orchestrator | Saturday 14 February 2026 06:46:32 +0000 (0:00:03.022) 1:09:44.482 *****
2026-02-14 06:46:54.348298 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5
2026-02-14 06:46:54.348309 | orchestrator |
2026-02-14 06:46:54.348319 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-14 06:46:54.348330 | orchestrator | Saturday 14 February 2026 06:46:33 +0000 (0:00:01.163) 1:09:45.646 *****
2026-02-14 06:46:54.348341 | orchestrator | ok: [testbed-node-5]
2026-02-14 06:46:54.348351 | orchestrator |
2026-02-14 06:46:54.348362 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-14 06:46:54.348373 | orchestrator | Saturday 14 February 2026 06:46:34 +0000 (0:00:01.517) 1:09:47.163 *****
2026-02-14 06:46:54.348384 | orchestrator | ok: [testbed-node-5]
2026-02-14 06:46:54.348394 | orchestrator |
2026-02-14 06:46:54.348405 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-14 06:46:54.348416 | orchestrator | Saturday 14 February 2026 06:46:35 +0000 (0:00:01.161) 1:09:48.325 *****
2026-02-14 06:46:54.348427 | orchestrator | ok: [testbed-node-5]
2026-02-14 06:46:54.348438 | orchestrator |
2026-02-14 06:46:54.348449 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-14 06:46:54.348460 | orchestrator | Saturday 14 February 2026 06:46:37 +0000 (0:00:01.437) 1:09:49.763 *****
2026-02-14 06:46:54.348470 | orchestrator | ok: [testbed-node-5]
2026-02-14 06:46:54.348481 | orchestrator |
2026-02-14 06:46:54.348513 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-14 06:46:54.348525 | orchestrator | Saturday
14 February 2026 06:46:38 +0000 (0:00:01.189) 1:09:50.953 ***** 2026-02-14 06:46:54.348536 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:46:54.348547 | orchestrator | 2026-02-14 06:46:54.348558 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-14 06:46:54.348568 | orchestrator | Saturday 14 February 2026 06:46:39 +0000 (0:00:01.134) 1:09:52.087 ***** 2026-02-14 06:46:54.348590 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:46:54.348609 | orchestrator | 2026-02-14 06:46:54.348635 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-14 06:46:54.348654 | orchestrator | Saturday 14 February 2026 06:46:41 +0000 (0:00:01.575) 1:09:53.663 ***** 2026-02-14 06:46:54.348671 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:46:54.348711 | orchestrator | 2026-02-14 06:46:54.348731 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-14 06:46:54.348748 | orchestrator | Saturday 14 February 2026 06:46:42 +0000 (0:00:01.163) 1:09:54.826 ***** 2026-02-14 06:46:54.348765 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:46:54.348781 | orchestrator | 2026-02-14 06:46:54.348799 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-14 06:46:54.348816 | orchestrator | Saturday 14 February 2026 06:46:43 +0000 (0:00:01.129) 1:09:55.956 ***** 2026-02-14 06:46:54.348834 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 06:46:54.348851 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:46:54.348870 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 06:46:54.348889 | orchestrator | 2026-02-14 06:46:54.348907 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-02-14 06:46:54.348935 | orchestrator | Saturday 14 February 2026 06:46:45 +0000 (0:00:02.210) 1:09:58.166 ***** 2026-02-14 06:46:54.348954 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:46:54.348965 | orchestrator | 2026-02-14 06:46:54.348976 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-14 06:46:54.348987 | orchestrator | Saturday 14 February 2026 06:46:47 +0000 (0:00:01.269) 1:09:59.436 ***** 2026-02-14 06:46:54.348998 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 06:46:54.349008 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:46:54.349019 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 06:46:54.349029 | orchestrator | 2026-02-14 06:46:54.349040 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-14 06:46:54.349051 | orchestrator | Saturday 14 February 2026 06:46:50 +0000 (0:00:02.939) 1:10:02.376 ***** 2026-02-14 06:46:54.349061 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-14 06:46:54.349073 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-14 06:46:54.349083 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-14 06:46:54.349102 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:46:54.349113 | orchestrator | 2026-02-14 06:46:54.349124 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-14 06:46:54.349135 | orchestrator | Saturday 14 February 2026 06:46:51 +0000 (0:00:01.410) 1:10:03.786 ***** 2026-02-14 06:46:54.349148 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-14 06:46:54.349162 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-14 06:46:54.349173 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-14 06:46:54.349184 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:46:54.349195 | orchestrator | 2026-02-14 06:46:54.349206 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-14 06:46:54.349227 | orchestrator | Saturday 14 February 2026 06:46:53 +0000 (0:00:01.683) 1:10:05.470 ***** 2026-02-14 06:46:54.349240 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 06:46:54.349265 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 06:47:13.482368 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-14 06:47:13.482487 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:47:13.482504 | orchestrator | 2026-02-14 06:47:13.482517 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-14 06:47:13.482530 | orchestrator | Saturday 14 February 2026 06:46:54 +0000 (0:00:01.189) 1:10:06.660 ***** 2026-02-14 06:47:13.482560 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'fcade5e8eca4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-14 06:46:47.722293', 'end': '2026-02-14 06:46:47.766574', 'delta': '0:00:00.044281', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fcade5e8eca4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-14 06:47:13.482576 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'b8937503c016', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-14 06:46:48.285024', 'end': '2026-02-14 06:46:48.322246', 'delta': '0:00:00.037222', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b8937503c016'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-14 06:47:13.482588 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'bc1e9cbf1ddd', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-14 06:46:48.842899', 'end': '2026-02-14 06:46:48.888806', 'delta': '0:00:00.045907', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['bc1e9cbf1ddd'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-14 06:47:13.482622 | orchestrator | 2026-02-14 06:47:13.482634 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-14 06:47:13.482645 | orchestrator | Saturday 14 February 2026 06:46:55 +0000 (0:00:01.246) 1:10:07.906 ***** 2026-02-14 06:47:13.482657 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:47:13.482669 | orchestrator | 2026-02-14 06:47:13.482680 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-14 06:47:13.482690 | orchestrator | Saturday 14 February 2026 06:46:56 +0000 (0:00:01.267) 1:10:09.174 ***** 2026-02-14 06:47:13.482738 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:47:13.482750 | orchestrator | 2026-02-14 06:47:13.482761 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
********************************* 2026-02-14 06:47:13.482772 | orchestrator | Saturday 14 February 2026 06:46:58 +0000 (0:00:01.274) 1:10:10.448 ***** 2026-02-14 06:47:13.482783 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:47:13.482794 | orchestrator | 2026-02-14 06:47:13.482805 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-14 06:47:13.482815 | orchestrator | Saturday 14 February 2026 06:46:59 +0000 (0:00:01.205) 1:10:11.654 ***** 2026-02-14 06:47:13.482826 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-14 06:47:13.482837 | orchestrator | 2026-02-14 06:47:13.482848 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-14 06:47:13.482859 | orchestrator | Saturday 14 February 2026 06:47:01 +0000 (0:00:02.040) 1:10:13.695 ***** 2026-02-14 06:47:13.482869 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:47:13.482881 | orchestrator | 2026-02-14 06:47:13.482892 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-14 06:47:13.482905 | orchestrator | Saturday 14 February 2026 06:47:02 +0000 (0:00:01.191) 1:10:14.886 ***** 2026-02-14 06:47:13.482935 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:47:13.482948 | orchestrator | 2026-02-14 06:47:13.482960 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-14 06:47:13.482974 | orchestrator | Saturday 14 February 2026 06:47:03 +0000 (0:00:01.144) 1:10:16.030 ***** 2026-02-14 06:47:13.482986 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:47:13.482998 | orchestrator | 2026-02-14 06:47:13.483010 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-14 06:47:13.483022 | orchestrator | Saturday 14 February 2026 06:47:04 +0000 (0:00:01.245) 1:10:17.276 ***** 2026-02-14 06:47:13.483035 | orchestrator | 
skipping: [testbed-node-5] 2026-02-14 06:47:13.483047 | orchestrator | 2026-02-14 06:47:13.483060 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-14 06:47:13.483072 | orchestrator | Saturday 14 February 2026 06:47:06 +0000 (0:00:01.228) 1:10:18.505 ***** 2026-02-14 06:47:13.483084 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:47:13.483096 | orchestrator | 2026-02-14 06:47:13.483109 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-14 06:47:13.483120 | orchestrator | Saturday 14 February 2026 06:47:07 +0000 (0:00:01.159) 1:10:19.665 ***** 2026-02-14 06:47:13.483133 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:47:13.483146 | orchestrator | 2026-02-14 06:47:13.483158 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-14 06:47:13.483171 | orchestrator | Saturday 14 February 2026 06:47:08 +0000 (0:00:01.213) 1:10:20.879 ***** 2026-02-14 06:47:13.483184 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:47:13.483196 | orchestrator | 2026-02-14 06:47:13.483207 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-14 06:47:13.483217 | orchestrator | Saturday 14 February 2026 06:47:09 +0000 (0:00:01.130) 1:10:22.009 ***** 2026-02-14 06:47:13.483228 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:47:13.483239 | orchestrator | 2026-02-14 06:47:13.483255 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-14 06:47:13.483267 | orchestrator | Saturday 14 February 2026 06:47:10 +0000 (0:00:01.192) 1:10:23.202 ***** 2026-02-14 06:47:13.483286 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:47:13.483297 | orchestrator | 2026-02-14 06:47:13.483308 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-14 06:47:13.483320 
| orchestrator | Saturday 14 February 2026 06:47:12 +0000 (0:00:01.131) 1:10:24.334 ***** 2026-02-14 06:47:13.483331 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:47:13.483342 | orchestrator | 2026-02-14 06:47:13.483353 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-14 06:47:13.483363 | orchestrator | Saturday 14 February 2026 06:47:13 +0000 (0:00:01.181) 1:10:25.515 ***** 2026-02-14 06:47:13.483375 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:47:13.483388 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f7da5590--35e5--5703--96c8--37fe127c27f7-osd--block--f7da5590--35e5--5703--96c8--37fe127c27f7', 'dm-uuid-LVM-MtrIT20WffpmoZtgfeTXRFdMHN6P3sAdBjy5doWEhe9rKv9L584cW3XE9oTwvrjF'], 'uuids': ['d1275021-b819-484f-a475-f1a37389bb5c'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '54e6ca54', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Bjy5do-WEhe-9rKv-9L58-4cW3-XE9o-TwvrjF']}})  2026-02-14 06:47:13.483401 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43152e32-b25a-4e6e-b6b7-c3272099ce67', 'scsi-SQEMU_QEMU_HARDDISK_43152e32-b25a-4e6e-b6b7-c3272099ce67'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '43152e32', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-14 06:47:13.483422 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-56EAYM-xHsu-7hCn-RY2l-0van-u71J-PPT3Ej', 'scsi-0QEMU_QEMU_HARDDISK_89ffb490-ef56-465e-9c2a-8772cc279d48', 'scsi-SQEMU_QEMU_HARDDISK_89ffb490-ef56-465e-9c2a-8772cc279d48'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '89ffb490', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--1745485d--ab31--507e--930d--8d3ce82a0691-osd--block--1745485d--ab31--507e--930d--8d3ce82a0691']}})  2026-02-14 06:47:14.646945 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:47:14.647032 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:47:14.647078 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-06-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-14 06:47:14.647088 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:47:14.647094 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ozNSSm-TzZ4-0xZx-DrUn-qvt7-q7MT-Hzgzhl', 'dm-uuid-CRYPT-LUKS2-f72393e18a524b3b834b9c577813242e-ozNSSm-TzZ4-0xZx-DrUn-qvt7-q7MT-Hzgzhl'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-14 06:47:14.647101 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:47:14.647108 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1745485d--ab31--507e--930d--8d3ce82a0691-osd--block--1745485d--ab31--507e--930d--8d3ce82a0691', 'dm-uuid-LVM-XF74CRGH0USDiTPtHNxBQbnIHrjKBwEGozNSSmTzZ40xZxDrUnqvt7q7MTHzgzhl'], 'uuids': ['f72393e1-8a52-4b3b-834b-9c577813242e'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '89ffb490', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['ozNSSm-TzZ4-0xZx-DrUn-qvt7-q7MT-Hzgzhl']}})  2026-02-14 06:47:14.647132 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-5s32D9-BYka-Bj8X-nglK-5PU8-KqP1-tEDCHR', 'scsi-0QEMU_QEMU_HARDDISK_54e6ca54-a1fe-4396-8891-5cf52f763d40', 'scsi-SQEMU_QEMU_HARDDISK_54e6ca54-a1fe-4396-8891-5cf52f763d40'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '54e6ca54', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--f7da5590--35e5--5703--96c8--37fe127c27f7-osd--block--f7da5590--35e5--5703--96c8--37fe127c27f7']}})  2026-02-14 06:47:14.647140 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:47:14.647159 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '69aee15b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part16', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part14', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part15', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part1', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-14 06:47:14.647168 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:47:14.647174 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-14 06:47:14.647185 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Bjy5do-WEhe-9rKv-9L58-4cW3-XE9o-TwvrjF', 'dm-uuid-CRYPT-LUKS2-d1275021b819484fa475f1a37389bb5c-Bjy5do-WEhe-9rKv-9L58-4cW3-XE9o-TwvrjF'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-14 06:47:14.855672 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:47:14.855786 | orchestrator | 2026-02-14 06:47:14.855799 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-14 06:47:14.855829 | orchestrator | Saturday 14 February 2026 06:47:14 +0000 (0:00:01.451) 1:10:26.966 ***** 2026-02-14 06:47:14.855841 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:47:14.855866 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f7da5590--35e5--5703--96c8--37fe127c27f7-osd--block--f7da5590--35e5--5703--96c8--37fe127c27f7', 'dm-uuid-LVM-MtrIT20WffpmoZtgfeTXRFdMHN6P3sAdBjy5doWEhe9rKv9L584cW3XE9oTwvrjF'], 'uuids': ['d1275021-b819-484f-a475-f1a37389bb5c'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '54e6ca54', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Bjy5do-WEhe-9rKv-9L58-4cW3-XE9o-TwvrjF']}}, 'ansible_loop_var': 'item'})  2026-02-14 06:47:14.855875 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43152e32-b25a-4e6e-b6b7-c3272099ce67', 'scsi-SQEMU_QEMU_HARDDISK_43152e32-b25a-4e6e-b6b7-c3272099ce67'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '43152e32', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:47:14.855885 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-56EAYM-xHsu-7hCn-RY2l-0van-u71J-PPT3Ej', 'scsi-0QEMU_QEMU_HARDDISK_89ffb490-ef56-465e-9c2a-8772cc279d48', 'scsi-SQEMU_QEMU_HARDDISK_89ffb490-ef56-465e-9c2a-8772cc279d48'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '89ffb490', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--1745485d--ab31--507e--930d--8d3ce82a0691-osd--block--1745485d--ab31--507e--930d--8d3ce82a0691']}}, 'ansible_loop_var': 'item'})  2026-02-14 06:47:14.855913 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:47:14.855930 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:47:14.855943 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-14-02-18-06-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:47:14.855952 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:47:14.855960 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ozNSSm-TzZ4-0xZx-DrUn-qvt7-q7MT-Hzgzhl', 'dm-uuid-CRYPT-LUKS2-f72393e18a524b3b834b9c577813242e-ozNSSm-TzZ4-0xZx-DrUn-qvt7-q7MT-Hzgzhl'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:47:14.855968 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:47:14.855994 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1745485d--ab31--507e--930d--8d3ce82a0691-osd--block--1745485d--ab31--507e--930d--8d3ce82a0691', 'dm-uuid-LVM-XF74CRGH0USDiTPtHNxBQbnIHrjKBwEGozNSSmTzZ40xZxDrUnqvt7q7MTHzgzhl'], 'uuids': ['f72393e1-8a52-4b3b-834b-9c577813242e'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '89ffb490', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['ozNSSm-TzZ4-0xZx-DrUn-qvt7-q7MT-Hzgzhl']}}, 'ansible_loop_var': 'item'})  2026-02-14 06:47:28.315418 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-5s32D9-BYka-Bj8X-nglK-5PU8-KqP1-tEDCHR', 'scsi-0QEMU_QEMU_HARDDISK_54e6ca54-a1fe-4396-8891-5cf52f763d40', 'scsi-SQEMU_QEMU_HARDDISK_54e6ca54-a1fe-4396-8891-5cf52f763d40'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '54e6ca54', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--f7da5590--35e5--5703--96c8--37fe127c27f7-osd--block--f7da5590--35e5--5703--96c8--37fe127c27f7']}}, 'ansible_loop_var': 'item'})  2026-02-14 06:47:28.315540 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:47:28.315558 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '69aee15b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part16', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part14', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part15', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part1', 'scsi-SQEMU_QEMU_HARDDISK_69aee15b-d447-41d8-b515-509351298397-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:47:28.315619 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:47:28.315641 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:47:28.315654 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Bjy5do-WEhe-9rKv-9L58-4cW3-XE9o-TwvrjF', 'dm-uuid-CRYPT-LUKS2-d1275021b819484fa475f1a37389bb5c-Bjy5do-WEhe-9rKv-9L58-4cW3-XE9o-TwvrjF'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-14 06:47:28.315666 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:47:28.315679 | orchestrator | 2026-02-14 06:47:28.315691 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-14 06:47:28.315703 | orchestrator | Saturday 14 February 2026 06:47:16 +0000 (0:00:01.456) 1:10:28.423 ***** 2026-02-14 06:47:28.315761 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:47:28.315775 | orchestrator | 2026-02-14 06:47:28.315786 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-14 06:47:28.315798 | orchestrator | Saturday 14 February 2026 06:47:17 +0000 (0:00:01.482) 1:10:29.905 ***** 2026-02-14 06:47:28.315809 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:47:28.315820 | orchestrator | 2026-02-14 06:47:28.315830 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-14 06:47:28.315841 | orchestrator | Saturday 14 February 2026 06:47:18 +0000 (0:00:01.140) 1:10:31.046 ***** 2026-02-14 06:47:28.315852 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:47:28.315862 | orchestrator | 2026-02-14 06:47:28.315873 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-14 06:47:28.315884 | orchestrator | Saturday 14 February 2026 06:47:20 +0000 (0:00:01.478) 1:10:32.525 ***** 2026-02-14 06:47:28.315894 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:47:28.315905 | orchestrator | 2026-02-14 06:47:28.315916 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-14 06:47:28.315926 | orchestrator | Saturday 14 February 2026 06:47:21 +0000 (0:00:01.126) 1:10:33.652 ***** 2026-02-14 06:47:28.315937 | orchestrator | skipping: [testbed-node-5] 2026-02-14 
06:47:28.315958 | orchestrator | 2026-02-14 06:47:28.315969 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-14 06:47:28.315980 | orchestrator | Saturday 14 February 2026 06:47:23 +0000 (0:00:01.742) 1:10:35.394 ***** 2026-02-14 06:47:28.315990 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:47:28.316001 | orchestrator | 2026-02-14 06:47:28.316012 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-14 06:47:28.316022 | orchestrator | Saturday 14 February 2026 06:47:24 +0000 (0:00:01.148) 1:10:36.543 ***** 2026-02-14 06:47:28.316033 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-14 06:47:28.316044 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-14 06:47:28.316055 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-14 06:47:28.316066 | orchestrator | 2026-02-14 06:47:28.316076 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-14 06:47:28.316087 | orchestrator | Saturday 14 February 2026 06:47:25 +0000 (0:00:01.723) 1:10:38.266 ***** 2026-02-14 06:47:28.316098 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-14 06:47:28.316109 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-14 06:47:28.316120 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-14 06:47:28.316130 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:47:28.316141 | orchestrator | 2026-02-14 06:47:28.316152 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-14 06:47:28.316162 | orchestrator | Saturday 14 February 2026 06:47:27 +0000 (0:00:01.171) 1:10:39.438 ***** 2026-02-14 06:47:28.316173 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-02-14 06:47:28.316185 | 
orchestrator | 2026-02-14 06:47:28.316204 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-14 06:48:10.787507 | orchestrator | Saturday 14 February 2026 06:47:28 +0000 (0:00:01.185) 1:10:40.624 ***** 2026-02-14 06:48:10.787609 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:48:10.787621 | orchestrator | 2026-02-14 06:48:10.787629 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-14 06:48:10.787638 | orchestrator | Saturday 14 February 2026 06:47:29 +0000 (0:00:01.136) 1:10:41.761 ***** 2026-02-14 06:48:10.787645 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:48:10.787652 | orchestrator | 2026-02-14 06:48:10.787660 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-14 06:48:10.787667 | orchestrator | Saturday 14 February 2026 06:47:30 +0000 (0:00:01.154) 1:10:42.916 ***** 2026-02-14 06:48:10.787675 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:48:10.787682 | orchestrator | 2026-02-14 06:48:10.787703 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-14 06:48:10.787710 | orchestrator | Saturday 14 February 2026 06:47:31 +0000 (0:00:01.156) 1:10:44.073 ***** 2026-02-14 06:48:10.787718 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:48:10.787726 | orchestrator | 2026-02-14 06:48:10.787733 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-14 06:48:10.787805 | orchestrator | Saturday 14 February 2026 06:47:32 +0000 (0:00:01.211) 1:10:45.284 ***** 2026-02-14 06:48:10.787816 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-14 06:48:10.787824 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-14 06:48:10.787831 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-5)  2026-02-14 06:48:10.787838 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:48:10.787845 | orchestrator | 2026-02-14 06:48:10.787852 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-14 06:48:10.787859 | orchestrator | Saturday 14 February 2026 06:47:34 +0000 (0:00:01.466) 1:10:46.750 ***** 2026-02-14 06:48:10.787867 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-14 06:48:10.787874 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-14 06:48:10.787906 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-14 06:48:10.787914 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:48:10.787921 | orchestrator | 2026-02-14 06:48:10.787928 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-14 06:48:10.787938 | orchestrator | Saturday 14 February 2026 06:47:36 +0000 (0:00:01.820) 1:10:48.570 ***** 2026-02-14 06:48:10.787951 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-14 06:48:10.787962 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-14 06:48:10.787974 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-14 06:48:10.787986 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:48:10.787998 | orchestrator | 2026-02-14 06:48:10.788011 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-14 06:48:10.788023 | orchestrator | Saturday 14 February 2026 06:47:38 +0000 (0:00:01.833) 1:10:50.404 ***** 2026-02-14 06:48:10.788036 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:48:10.788048 | orchestrator | 2026-02-14 06:48:10.788060 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-14 06:48:10.788073 | orchestrator | Saturday 14 February 2026 06:47:39 +0000 
(0:00:01.232) 1:10:51.637 ***** 2026-02-14 06:48:10.788087 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-14 06:48:10.788100 | orchestrator | 2026-02-14 06:48:10.788114 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-14 06:48:10.788125 | orchestrator | Saturday 14 February 2026 06:47:40 +0000 (0:00:01.414) 1:10:53.051 ***** 2026-02-14 06:48:10.788134 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 06:48:10.788143 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:48:10.788151 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 06:48:10.788159 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-14 06:48:10.788168 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-14 06:48:10.788176 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-02-14 06:48:10.788184 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-14 06:48:10.788192 | orchestrator | 2026-02-14 06:48:10.788200 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-14 06:48:10.788208 | orchestrator | Saturday 14 February 2026 06:47:42 +0000 (0:00:01.863) 1:10:54.915 ***** 2026-02-14 06:48:10.788216 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-14 06:48:10.788224 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-14 06:48:10.788232 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-14 06:48:10.788240 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-02-14 06:48:10.788248 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-14 06:48:10.788255 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-02-14 06:48:10.788263 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-14 06:48:10.788271 | orchestrator | 2026-02-14 06:48:10.788279 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] **************************** 2026-02-14 06:48:10.788287 | orchestrator | Saturday 14 February 2026 06:47:44 +0000 (0:00:02.334) 1:10:57.249 ***** 2026-02-14 06:48:10.788295 | orchestrator | changed: [testbed-node-5] 2026-02-14 06:48:10.788303 | orchestrator | 2026-02-14 06:48:10.788328 | orchestrator | TASK [Stop ceph rgw (pt. 1)] *************************************************** 2026-02-14 06:48:10.788337 | orchestrator | Saturday 14 February 2026 06:47:46 +0000 (0:00:01.973) 1:10:59.223 ***** 2026-02-14 06:48:10.788355 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-14 06:48:10.788364 | orchestrator | 2026-02-14 06:48:10.788372 | orchestrator | TASK [Stop ceph rgw (pt. 
2)] *************************************************** 2026-02-14 06:48:10.788380 | orchestrator | Saturday 14 February 2026 06:47:49 +0000 (0:00:02.431) 1:11:01.654 ***** 2026-02-14 06:48:10.788389 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-14 06:48:10.788398 | orchestrator | 2026-02-14 06:48:10.788411 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-14 06:48:10.788420 | orchestrator | Saturday 14 February 2026 06:47:51 +0000 (0:00:01.942) 1:11:03.597 ***** 2026-02-14 06:48:10.788427 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5 2026-02-14 06:48:10.788434 | orchestrator | 2026-02-14 06:48:10.788441 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-14 06:48:10.788448 | orchestrator | Saturday 14 February 2026 06:47:52 +0000 (0:00:01.115) 1:11:04.713 ***** 2026-02-14 06:48:10.788455 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5 2026-02-14 06:48:10.788462 | orchestrator | 2026-02-14 06:48:10.788469 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-14 06:48:10.788476 | orchestrator | Saturday 14 February 2026 06:47:53 +0000 (0:00:01.145) 1:11:05.859 ***** 2026-02-14 06:48:10.788483 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:48:10.788490 | orchestrator | 2026-02-14 06:48:10.788497 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-14 06:48:10.788504 | orchestrator | Saturday 14 February 2026 06:47:54 +0000 (0:00:01.129) 1:11:06.988 ***** 2026-02-14 06:48:10.788511 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:48:10.788518 | orchestrator | 2026-02-14 06:48:10.788525 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2026-02-14 06:48:10.788532 | orchestrator | Saturday 14 February 2026 06:47:56 +0000 (0:00:01.581) 1:11:08.569 ***** 2026-02-14 06:48:10.788539 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:48:10.788546 | orchestrator | 2026-02-14 06:48:10.788553 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-14 06:48:10.788560 | orchestrator | Saturday 14 February 2026 06:47:57 +0000 (0:00:01.541) 1:11:10.111 ***** 2026-02-14 06:48:10.788567 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:48:10.788574 | orchestrator | 2026-02-14 06:48:10.788581 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-14 06:48:10.788588 | orchestrator | Saturday 14 February 2026 06:47:59 +0000 (0:00:01.636) 1:11:11.748 ***** 2026-02-14 06:48:10.788595 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:48:10.788602 | orchestrator | 2026-02-14 06:48:10.788610 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-14 06:48:10.788617 | orchestrator | Saturday 14 February 2026 06:48:00 +0000 (0:00:01.131) 1:11:12.879 ***** 2026-02-14 06:48:10.788624 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:48:10.788631 | orchestrator | 2026-02-14 06:48:10.788638 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-14 06:48:10.788645 | orchestrator | Saturday 14 February 2026 06:48:01 +0000 (0:00:01.117) 1:11:13.997 ***** 2026-02-14 06:48:10.788652 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:48:10.788659 | orchestrator | 2026-02-14 06:48:10.788666 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-14 06:48:10.788673 | orchestrator | Saturday 14 February 2026 06:48:02 +0000 (0:00:01.155) 1:11:15.152 ***** 2026-02-14 06:48:10.788680 | 
orchestrator | ok: [testbed-node-5] 2026-02-14 06:48:10.788687 | orchestrator | 2026-02-14 06:48:10.788694 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-14 06:48:10.788701 | orchestrator | Saturday 14 February 2026 06:48:04 +0000 (0:00:01.536) 1:11:16.689 ***** 2026-02-14 06:48:10.788714 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:48:10.788721 | orchestrator | 2026-02-14 06:48:10.788728 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-14 06:48:10.788735 | orchestrator | Saturday 14 February 2026 06:48:05 +0000 (0:00:01.587) 1:11:18.277 ***** 2026-02-14 06:48:10.788761 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:48:10.788775 | orchestrator | 2026-02-14 06:48:10.788782 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-14 06:48:10.788789 | orchestrator | Saturday 14 February 2026 06:48:06 +0000 (0:00:00.776) 1:11:19.053 ***** 2026-02-14 06:48:10.788796 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:48:10.788804 | orchestrator | 2026-02-14 06:48:10.788811 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-14 06:48:10.788818 | orchestrator | Saturday 14 February 2026 06:48:07 +0000 (0:00:00.771) 1:11:19.824 ***** 2026-02-14 06:48:10.788825 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:48:10.788832 | orchestrator | 2026-02-14 06:48:10.788839 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-14 06:48:10.788846 | orchestrator | Saturday 14 February 2026 06:48:08 +0000 (0:00:00.796) 1:11:20.621 ***** 2026-02-14 06:48:10.788853 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:48:10.788860 | orchestrator | 2026-02-14 06:48:10.788867 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-14 06:48:10.788874 
| orchestrator | Saturday 14 February 2026 06:48:09 +0000 (0:00:00.784) 1:11:21.405 ***** 2026-02-14 06:48:10.788881 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:48:10.788889 | orchestrator | 2026-02-14 06:48:10.788896 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-14 06:48:10.788903 | orchestrator | Saturday 14 February 2026 06:48:09 +0000 (0:00:00.831) 1:11:22.236 ***** 2026-02-14 06:48:10.788910 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:48:10.788917 | orchestrator | 2026-02-14 06:48:10.788929 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-14 06:48:51.555422 | orchestrator | Saturday 14 February 2026 06:48:10 +0000 (0:00:00.863) 1:11:23.100 ***** 2026-02-14 06:48:51.555555 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:48:51.555566 | orchestrator | 2026-02-14 06:48:51.555574 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-14 06:48:51.555582 | orchestrator | Saturday 14 February 2026 06:48:11 +0000 (0:00:00.777) 1:11:23.878 ***** 2026-02-14 06:48:51.555589 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:48:51.555596 | orchestrator | 2026-02-14 06:48:51.555603 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-14 06:48:51.555610 | orchestrator | Saturday 14 February 2026 06:48:12 +0000 (0:00:00.789) 1:11:24.667 ***** 2026-02-14 06:48:51.555617 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:48:51.555625 | orchestrator | 2026-02-14 06:48:51.555648 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-14 06:48:51.555655 | orchestrator | Saturday 14 February 2026 06:48:13 +0000 (0:00:00.835) 1:11:25.502 ***** 2026-02-14 06:48:51.555662 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:48:51.555671 | orchestrator | 2026-02-14 06:48:51.555683 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-14 06:48:51.555695 | orchestrator | Saturday 14 February 2026 06:48:13 +0000 (0:00:00.801) 1:11:26.303 *****
2026-02-14 06:48:51.555705 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:48:51.555716 | orchestrator |
2026-02-14 06:48:51.555728 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-14 06:48:51.555739 | orchestrator | Saturday 14 February 2026 06:48:14 +0000 (0:00:00.784) 1:11:27.088 *****
2026-02-14 06:48:51.555750 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:48:51.555761 | orchestrator |
2026-02-14 06:48:51.555826 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-14 06:48:51.555838 | orchestrator | Saturday 14 February 2026 06:48:15 +0000 (0:00:00.814) 1:11:27.903 *****
2026-02-14 06:48:51.555877 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:48:51.555888 | orchestrator |
2026-02-14 06:48:51.555898 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-14 06:48:51.555908 | orchestrator | Saturday 14 February 2026 06:48:16 +0000 (0:00:00.797) 1:11:28.701 *****
2026-02-14 06:48:51.555918 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:48:51.555928 | orchestrator |
2026-02-14 06:48:51.555940 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-14 06:48:51.555953 | orchestrator | Saturday 14 February 2026 06:48:17 +0000 (0:00:00.775) 1:11:29.477 *****
2026-02-14 06:48:51.555965 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:48:51.555978 | orchestrator |
2026-02-14 06:48:51.555991 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-14 06:48:51.556004 | orchestrator | Saturday 14 February 2026 06:48:18 +0000 (0:00:00.982) 1:11:30.459 *****
2026-02-14 06:48:51.556022 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:48:51.556035 | orchestrator |
2026-02-14 06:48:51.556047 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-14 06:48:51.556060 | orchestrator | Saturday 14 February 2026 06:48:18 +0000 (0:00:00.817) 1:11:31.276 *****
2026-02-14 06:48:51.556074 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:48:51.556086 | orchestrator |
2026-02-14 06:48:51.556099 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-14 06:48:51.556113 | orchestrator | Saturday 14 February 2026 06:48:19 +0000 (0:00:00.780) 1:11:32.057 *****
2026-02-14 06:48:51.556125 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:48:51.556137 | orchestrator |
2026-02-14 06:48:51.556150 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-14 06:48:51.556164 | orchestrator | Saturday 14 February 2026 06:48:20 +0000 (0:00:00.889) 1:11:32.946 *****
2026-02-14 06:48:51.556176 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:48:51.556187 | orchestrator |
2026-02-14 06:48:51.556200 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-14 06:48:51.556212 | orchestrator | Saturday 14 February 2026 06:48:21 +0000 (0:00:00.776) 1:11:33.723 *****
2026-02-14 06:48:51.556224 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:48:51.556236 | orchestrator |
2026-02-14 06:48:51.556248 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-14 06:48:51.556259 | orchestrator | Saturday 14 February 2026 06:48:22 +0000 (0:00:00.793) 1:11:34.517 *****
2026-02-14 06:48:51.556270 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:48:51.556282 | orchestrator |
2026-02-14 06:48:51.556295 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-14 06:48:51.556309 | orchestrator | Saturday 14 February 2026 06:48:22 +0000 (0:00:00.790) 1:11:35.307 *****
2026-02-14 06:48:51.556321 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:48:51.556334 | orchestrator |
2026-02-14 06:48:51.556346 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-14 06:48:51.556358 | orchestrator | Saturday 14 February 2026 06:48:23 +0000 (0:00:00.777) 1:11:36.085 *****
2026-02-14 06:48:51.556370 | orchestrator | ok: [testbed-node-5]
2026-02-14 06:48:51.556382 | orchestrator |
2026-02-14 06:48:51.556394 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-14 06:48:51.556407 | orchestrator | Saturday 14 February 2026 06:48:25 +0000 (0:00:01.561) 1:11:37.647 *****
2026-02-14 06:48:51.556419 | orchestrator | ok: [testbed-node-5]
2026-02-14 06:48:51.556431 | orchestrator |
2026-02-14 06:48:51.556443 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-14 06:48:51.556454 | orchestrator | Saturday 14 February 2026 06:48:27 +0000 (0:00:01.869) 1:11:39.517 *****
2026-02-14 06:48:51.556465 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5
2026-02-14 06:48:51.556477 | orchestrator |
2026-02-14 06:48:51.556487 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-14 06:48:51.556510 | orchestrator | Saturday 14 February 2026 06:48:28 +0000 (0:00:01.139) 1:11:40.656 *****
2026-02-14 06:48:51.556521 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:48:51.556532 | orchestrator |
2026-02-14 06:48:51.556543 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-14 06:48:51.556575 | orchestrator | Saturday 14 February 2026 06:48:29 +0000 (0:00:01.111) 1:11:41.768 *****
2026-02-14 06:48:51.556586 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:48:51.556597 | orchestrator |
2026-02-14 06:48:51.556607 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-14 06:48:51.556617 | orchestrator | Saturday 14 February 2026 06:48:30 +0000 (0:00:01.180) 1:11:42.949 *****
2026-02-14 06:48:51.556628 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-14 06:48:51.556638 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-14 06:48:51.556649 | orchestrator |
2026-02-14 06:48:51.556660 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-14 06:48:51.556680 | orchestrator | Saturday 14 February 2026 06:48:32 +0000 (0:00:01.802) 1:11:44.752 *****
2026-02-14 06:48:51.556692 | orchestrator | ok: [testbed-node-5]
2026-02-14 06:48:51.556703 | orchestrator |
2026-02-14 06:48:51.556713 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-14 06:48:51.556723 | orchestrator | Saturday 14 February 2026 06:48:33 +0000 (0:00:01.468) 1:11:46.220 *****
2026-02-14 06:48:51.556733 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:48:51.556742 | orchestrator |
2026-02-14 06:48:51.556753 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-14 06:48:51.556764 | orchestrator | Saturday 14 February 2026 06:48:35 +0000 (0:00:01.235) 1:11:47.455 *****
2026-02-14 06:48:51.556811 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:48:51.556823 | orchestrator |
2026-02-14 06:48:51.556833 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-14 06:48:51.556845 | orchestrator | Saturday 14 February 2026 06:48:35 +0000 (0:00:00.844) 1:11:48.299 *****
2026-02-14 06:48:51.556854 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:48:51.556860 | orchestrator |
2026-02-14 06:48:51.556867 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-14 06:48:51.556874 | orchestrator | Saturday 14 February 2026 06:48:36 +0000 (0:00:00.764) 1:11:49.064 *****
2026-02-14 06:48:51.556881 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5
2026-02-14 06:48:51.556888 | orchestrator |
2026-02-14 06:48:51.556894 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-14 06:48:51.556901 | orchestrator | Saturday 14 February 2026 06:48:37 +0000 (0:00:01.190) 1:11:50.255 *****
2026-02-14 06:48:51.556908 | orchestrator | ok: [testbed-node-5]
2026-02-14 06:48:51.556914 | orchestrator |
2026-02-14 06:48:51.556921 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-14 06:48:51.556928 | orchestrator | Saturday 14 February 2026 06:48:39 +0000 (0:00:01.699) 1:11:51.954 *****
2026-02-14 06:48:51.556935 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-14 06:48:51.556941 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-14 06:48:51.556948 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-14 06:48:51.556954 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:48:51.556961 | orchestrator |
2026-02-14 06:48:51.556968 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-14 06:48:51.556974 | orchestrator | Saturday 14 February 2026 06:48:40 +0000 (0:00:01.183) 1:11:53.138 *****
2026-02-14 06:48:51.556981 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:48:51.556988 | orchestrator |
2026-02-14 06:48:51.556995 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-14 06:48:51.557001 | orchestrator | Saturday 14 February 2026 06:48:41 +0000 (0:00:01.116) 1:11:54.255 *****
2026-02-14 06:48:51.557016 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:48:51.557023 | orchestrator |
2026-02-14 06:48:51.557030 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-14 06:48:51.557036 | orchestrator | Saturday 14 February 2026 06:48:43 +0000 (0:00:01.165) 1:11:55.420 *****
2026-02-14 06:48:51.557043 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:48:51.557050 | orchestrator |
2026-02-14 06:48:51.557056 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-14 06:48:51.557063 | orchestrator | Saturday 14 February 2026 06:48:44 +0000 (0:00:01.149) 1:11:56.570 *****
2026-02-14 06:48:51.557069 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:48:51.557076 | orchestrator |
2026-02-14 06:48:51.557083 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-14 06:48:51.557089 | orchestrator | Saturday 14 February 2026 06:48:45 +0000 (0:00:01.208) 1:11:57.778 *****
2026-02-14 06:48:51.557096 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:48:51.557103 | orchestrator |
2026-02-14 06:48:51.557109 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-14 06:48:51.557116 | orchestrator | Saturday 14 February 2026 06:48:46 +0000 (0:00:00.804) 1:11:58.583 *****
2026-02-14 06:48:51.557123 | orchestrator | ok: [testbed-node-5]
2026-02-14 06:48:51.557129 | orchestrator |
2026-02-14 06:48:51.557136 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-14 06:48:51.557143 | orchestrator | Saturday 14 February 2026 06:48:48 +0000 (0:00:02.125) 1:12:00.708 *****
2026-02-14 06:48:51.557149 | orchestrator | ok: [testbed-node-5]
2026-02-14 06:48:51.557156 | orchestrator |
2026-02-14 06:48:51.557163 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-14 06:48:51.557169 | orchestrator | Saturday 14 February 2026 06:48:49 +0000 (0:00:00.860) 1:12:01.569 *****
2026-02-14 06:48:51.557176 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5
2026-02-14 06:48:51.557182 | orchestrator |
2026-02-14 06:48:51.557189 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-14 06:48:51.557196 | orchestrator | Saturday 14 February 2026 06:48:50 +0000 (0:00:01.145) 1:12:02.715 *****
2026-02-14 06:48:51.557202 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:48:51.557209 | orchestrator |
2026-02-14 06:48:51.557216 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-14 06:48:51.557231 | orchestrator | Saturday 14 February 2026 06:48:51 +0000 (0:00:01.152) 1:12:03.867 *****
2026-02-14 06:49:32.795033 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:49:32.795153 | orchestrator |
2026-02-14 06:49:32.795170 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-14 06:49:32.795183 | orchestrator | Saturday 14 February 2026 06:48:52 +0000 (0:00:01.156) 1:12:05.024 *****
2026-02-14 06:49:32.795195 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:49:32.795206 | orchestrator |
2026-02-14 06:49:32.795218 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-14 06:49:32.795229 | orchestrator | Saturday 14 February 2026 06:48:53 +0000 (0:00:01.249) 1:12:06.274 *****
2026-02-14 06:49:32.795240 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:49:32.795251 | orchestrator |
2026-02-14 06:49:32.795280 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-14 06:49:32.795292 | orchestrator | Saturday 14 February 2026 06:48:55 +0000 (0:00:01.136) 1:12:07.410 *****
2026-02-14 06:49:32.795303 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:49:32.795314 | orchestrator |
2026-02-14 06:49:32.795325 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-14 06:49:32.795336 | orchestrator | Saturday 14 February 2026 06:48:56 +0000 (0:00:01.174) 1:12:08.585 *****
2026-02-14 06:49:32.795347 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:49:32.795358 | orchestrator |
2026-02-14 06:49:32.795368 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-14 06:49:32.795404 | orchestrator | Saturday 14 February 2026 06:48:57 +0000 (0:00:01.164) 1:12:09.750 *****
2026-02-14 06:49:32.795416 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:49:32.795427 | orchestrator |
2026-02-14 06:49:32.795438 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-14 06:49:32.795449 | orchestrator | Saturday 14 February 2026 06:48:58 +0000 (0:00:01.130) 1:12:10.881 *****
2026-02-14 06:49:32.795460 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:49:32.795471 | orchestrator |
2026-02-14 06:49:32.795482 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-14 06:49:32.795493 | orchestrator | Saturday 14 February 2026 06:48:59 +0000 (0:00:01.129) 1:12:12.010 *****
2026-02-14 06:49:32.795503 | orchestrator | ok: [testbed-node-5]
2026-02-14 06:49:32.795515 | orchestrator |
2026-02-14 06:49:32.795526 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-14 06:49:32.795537 | orchestrator | Saturday 14 February 2026 06:49:00 +0000 (0:00:00.800) 1:12:12.811 *****
2026-02-14 06:49:32.795548 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5
2026-02-14 06:49:32.795561 | orchestrator |
2026-02-14 06:49:32.795574 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-14 06:49:32.795586 | orchestrator | Saturday 14 February 2026 06:49:01 +0000 (0:00:01.219) 1:12:14.031 *****
2026-02-14 06:49:32.795599 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-02-14 06:49:32.795612 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-02-14 06:49:32.795625 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-02-14 06:49:32.795673 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-02-14 06:49:32.795687 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-02-14 06:49:32.795700 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-02-14 06:49:32.795713 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-02-14 06:49:32.795725 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-02-14 06:49:32.795738 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-14 06:49:32.795751 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-14 06:49:32.795763 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-14 06:49:32.795776 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-14 06:49:32.795788 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-14 06:49:32.795801 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-14 06:49:32.795814 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-02-14 06:49:32.795827 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-02-14 06:49:32.795839 | orchestrator |
2026-02-14 06:49:32.795851 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-14 06:49:32.795864 | orchestrator | Saturday 14 February 2026 06:49:07 +0000 (0:00:06.125) 1:12:20.156 *****
2026-02-14 06:49:32.795877 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5
2026-02-14 06:49:32.795889 | orchestrator |
2026-02-14 06:49:32.795901 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-14 06:49:32.795914 | orchestrator | Saturday 14 February 2026 06:49:09 +0000 (0:00:01.180) 1:12:21.336 *****
2026-02-14 06:49:32.795925 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-14 06:49:32.795937 | orchestrator |
2026-02-14 06:49:32.795948 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-14 06:49:32.795958 | orchestrator | Saturday 14 February 2026 06:49:10 +0000 (0:00:01.524) 1:12:22.860 *****
2026-02-14 06:49:32.795969 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-14 06:49:32.795989 | orchestrator |
2026-02-14 06:49:32.796000 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-14 06:49:32.796011 | orchestrator | Saturday 14 February 2026 06:49:12 +0000 (0:00:01.645) 1:12:24.506 *****
2026-02-14 06:49:32.796022 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:49:32.796033 | orchestrator |
2026-02-14 06:49:32.796044 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-14 06:49:32.796074 | orchestrator | Saturday 14 February 2026 06:49:12 +0000 (0:00:00.796) 1:12:25.302 *****
2026-02-14 06:49:32.796086 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:49:32.796097 | orchestrator |
2026-02-14 06:49:32.796108 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-14 06:49:32.796119 | orchestrator | Saturday 14 February 2026 06:49:13 +0000 (0:00:00.815) 1:12:26.118 *****
2026-02-14 06:49:32.796130 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:49:32.796141 | orchestrator |
2026-02-14 06:49:32.796152 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-14 06:49:32.796163 | orchestrator | Saturday 14 February 2026 06:49:14 +0000 (0:00:00.809) 1:12:26.927 *****
2026-02-14 06:49:32.796174 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:49:32.796185 | orchestrator |
2026-02-14 06:49:32.796203 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-14 06:49:32.796214 | orchestrator | Saturday 14 February 2026 06:49:15 +0000 (0:00:00.875) 1:12:27.802 *****
2026-02-14 06:49:32.796225 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:49:32.796236 | orchestrator |
2026-02-14 06:49:32.796247 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-14 06:49:32.796258 | orchestrator | Saturday 14 February 2026 06:49:16 +0000 (0:00:00.774) 1:12:28.600 *****
2026-02-14 06:49:32.796269 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:49:32.796280 | orchestrator |
2026-02-14 06:49:32.796291 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-14 06:49:32.796302 | orchestrator | Saturday 14 February 2026 06:49:17 +0000 (0:00:00.774) 1:12:29.374 *****
2026-02-14 06:49:32.796313 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:49:32.796324 | orchestrator |
2026-02-14 06:49:32.796335 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-14 06:49:32.796346 | orchestrator | Saturday 14 February 2026 06:49:17 +0000 (0:00:00.819) 1:12:30.194 *****
2026-02-14 06:49:32.796357 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:49:32.796368 | orchestrator |
2026-02-14 06:49:32.796379 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-14 06:49:32.796390 | orchestrator | Saturday 14 February 2026 06:49:18 +0000 (0:00:00.784) 1:12:30.979 *****
2026-02-14 06:49:32.796401 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:49:32.796412 | orchestrator |
2026-02-14 06:49:32.796423 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-14 06:49:32.796434 | orchestrator | Saturday 14 February 2026 06:49:19 +0000 (0:00:00.779) 1:12:31.758 *****
2026-02-14 06:49:32.796445 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:49:32.796456 | orchestrator |
2026-02-14 06:49:32.796467 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-14 06:49:32.796478 | orchestrator | Saturday 14 February 2026 06:49:20 +0000 (0:00:00.783) 1:12:32.542 *****
2026-02-14 06:49:32.796489 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:49:32.796500 | orchestrator |
2026-02-14 06:49:32.796511 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-14 06:49:32.796522 | orchestrator | Saturday 14 February 2026 06:49:21 +0000 (0:00:00.826) 1:12:33.369 *****
2026-02-14 06:49:32.796533 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-02-14 06:49:32.796544 | orchestrator |
2026-02-14 06:49:32.796555 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-14 06:49:32.796572 | orchestrator | Saturday 14 February 2026 06:49:25 +0000 (0:00:04.020) 1:12:37.390 *****
2026-02-14 06:49:32.796584 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-14 06:49:32.796595 | orchestrator |
2026-02-14 06:49:32.796606 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-14 06:49:32.796617 | orchestrator | Saturday 14 February 2026 06:49:25 +0000 (0:00:00.874) 1:12:38.264 *****
2026-02-14 06:49:32.796630 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-02-14 06:49:32.796661 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-02-14 06:49:32.796674 | orchestrator |
2026-02-14 06:49:32.796685 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-14 06:49:32.796695 | orchestrator | Saturday 14 February 2026 06:49:30 +0000 (0:00:04.480) 1:12:42.745 *****
2026-02-14 06:49:32.796706 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:49:32.796717 | orchestrator |
2026-02-14 06:49:32.796728 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-14 06:49:32.796739 | orchestrator | Saturday 14 February 2026 06:49:31 +0000 (0:00:00.770) 1:12:43.516 *****
2026-02-14 06:49:32.796750 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:49:32.796761 | orchestrator |
2026-02-14 06:49:32.796772 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-14 06:49:32.796783 | orchestrator | Saturday 14 February 2026 06:49:31 +0000 (0:00:00.780) 1:12:44.297 *****
2026-02-14 06:49:32.796794 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:49:32.796805 | orchestrator |
2026-02-14 06:49:32.796816 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-14 06:49:32.796834 | orchestrator | Saturday 14 February 2026 06:49:32 +0000 (0:00:00.810) 1:12:45.107 *****
2026-02-14 06:50:38.240624 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:50:38.240716 | orchestrator |
2026-02-14 06:50:38.240726 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-14 06:50:38.240734 | orchestrator | Saturday 14 February 2026 06:49:33 +0000 (0:00:00.852) 1:12:45.960 *****
2026-02-14 06:50:38.240741 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:50:38.240747 | orchestrator |
2026-02-14 06:50:38.240754 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-14 06:50:38.240760 | orchestrator | Saturday 14 February 2026 06:49:34 +0000 (0:00:00.810) 1:12:46.770 *****
2026-02-14 06:50:38.240780 | orchestrator | ok: [testbed-node-5]
2026-02-14 06:50:38.240788 | orchestrator |
2026-02-14 06:50:38.240794 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-14 06:50:38.240800 | orchestrator | Saturday 14 February 2026 06:49:35 +0000 (0:00:01.038) 1:12:47.809 *****
2026-02-14 06:50:38.240807 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-14 06:50:38.240813 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-14 06:50:38.240820 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-14 06:50:38.240826 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:50:38.240832 | orchestrator |
2026-02-14 06:50:38.240838 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-14 06:50:38.240844 | orchestrator | Saturday 14 February 2026 06:49:36 +0000 (0:00:01.093) 1:12:48.902 *****
2026-02-14 06:50:38.240868 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-14 06:50:38.240875 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-14 06:50:38.240881 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-14 06:50:38.240887 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:50:38.240893 | orchestrator |
2026-02-14 06:50:38.240899 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-14 06:50:38.240905 | orchestrator | Saturday 14 February 2026 06:49:37 +0000 (0:00:01.054) 1:12:49.957 *****
2026-02-14 06:50:38.240911 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-14 06:50:38.240917 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-14 06:50:38.240924 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-14 06:50:38.240931 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:50:38.240937 | orchestrator |
2026-02-14 06:50:38.240943 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-14 06:50:38.240949 | orchestrator | Saturday 14 February 2026 06:49:38 +0000 (0:00:01.089) 1:12:51.046 *****
2026-02-14 06:50:38.240955 | orchestrator | ok: [testbed-node-5]
2026-02-14 06:50:38.240961 | orchestrator |
2026-02-14 06:50:38.240967 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-14 06:50:38.240974 | orchestrator | Saturday 14 February 2026 06:49:39 +0000 (0:00:00.815) 1:12:51.861 *****
2026-02-14 06:50:38.240980 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-14 06:50:38.240986 | orchestrator |
2026-02-14 06:50:38.240992 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-14 06:50:38.240998 | orchestrator | Saturday 14 February 2026 06:49:40 +0000 (0:00:01.076) 1:12:52.938 *****
2026-02-14 06:50:38.241004 | orchestrator | ok: [testbed-node-5]
2026-02-14 06:50:38.241010 | orchestrator |
2026-02-14 06:50:38.241016 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-02-14 06:50:38.241022 | orchestrator | Saturday 14 February 2026 06:49:42 +0000 (0:00:01.423) 1:12:54.362 *****
2026-02-14 06:50:38.241028 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-5
2026-02-14 06:50:38.241035 | orchestrator |
2026-02-14 06:50:38.241041 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-02-14 06:50:38.241047 | orchestrator | Saturday 14 February 2026 06:49:43 +0000 (0:00:01.144) 1:12:55.507 *****
2026-02-14 06:50:38.241053 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-14 06:50:38.241059 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-14 06:50:38.241065 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-14 06:50:38.241072 | orchestrator |
2026-02-14 06:50:38.241078 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-02-14 06:50:38.241084 | orchestrator | Saturday 14 February 2026 06:49:46 +0000 (0:00:03.248) 1:12:58.756 *****
2026-02-14 06:50:38.241090 | orchestrator | ok: [testbed-node-5] => (item=None)
2026-02-14 06:50:38.241096 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-14 06:50:38.241102 | orchestrator | ok: [testbed-node-5]
2026-02-14 06:50:38.241108 | orchestrator |
2026-02-14 06:50:38.241115 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-02-14 06:50:38.241121 | orchestrator | Saturday 14 February 2026 06:49:48 +0000 (0:00:02.021) 1:13:00.778 *****
2026-02-14 06:50:38.241127 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:50:38.241133 | orchestrator |
2026-02-14 06:50:38.241139 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-02-14 06:50:38.241145 | orchestrator | Saturday 14 February 2026 06:49:49 +0000 (0:00:00.777) 1:13:01.555 *****
2026-02-14 06:50:38.241151 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-5
2026-02-14 06:50:38.241158 | orchestrator |
2026-02-14 06:50:38.241164 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-02-14 06:50:38.241175 | orchestrator | Saturday 14 February 2026 06:49:50 +0000 (0:00:01.290) 1:13:02.845 *****
2026-02-14 06:50:38.241182 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-14 06:50:38.241189 | orchestrator |
2026-02-14 06:50:38.241196 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-02-14 06:50:38.241202 | orchestrator | Saturday 14 February 2026 06:49:52 +0000 (0:00:01.652) 1:13:04.497 *****
2026-02-14 06:50:38.241221 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-14 06:50:38.241229 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-02-14 06:50:38.241236 | orchestrator |
2026-02-14 06:50:38.241242 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-02-14 06:50:38.241248 | orchestrator | Saturday 14 February 2026 06:49:57 +0000 (0:00:05.023) 1:13:09.521 *****
2026-02-14 06:50:38.241258 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-14 06:50:38.241264 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-14 06:50:38.241270 | orchestrator |
2026-02-14 06:50:38.241277 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-02-14 06:50:38.241283 | orchestrator | Saturday 14 February 2026 06:50:00 +0000 (0:00:03.034) 1:13:12.556 *****
2026-02-14 06:50:38.241289 | orchestrator | ok: [testbed-node-5] => (item=None)
2026-02-14 06:50:38.241295 | orchestrator | ok: [testbed-node-5]
2026-02-14 06:50:38.241302 | orchestrator |
2026-02-14 06:50:38.241308 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2026-02-14 06:50:38.241314 | orchestrator | Saturday 14 February 2026 06:50:01 +0000 (0:00:01.155) 1:13:14.229 *****
2026-02-14 06:50:38.241320 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-5
2026-02-14 06:50:38.241326 | orchestrator |
2026-02-14 06:50:38.241332 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2026-02-14 06:50:38.241339 | orchestrator | Saturday 14 February 2026 06:50:03 +0000 (0:00:01.155) 1:13:15.385 *****
2026-02-14 06:50:38.241345 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 06:50:38.241352 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 06:50:38.241358 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 06:50:38.241364 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 06:50:38.241371 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 06:50:38.241377 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:50:38.241383 | orchestrator |
2026-02-14 06:50:38.241390 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-02-14 06:50:38.241396 | orchestrator | Saturday 14 February 2026 06:50:04 +0000 (0:00:01.641) 1:13:17.026 *****
2026-02-14 06:50:38.241402 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 06:50:38.241429 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 06:50:38.241436 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 06:50:38.241442 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 06:50:38.241452 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 06:50:38.241458 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:50:38.241465 | orchestrator |
2026-02-14 06:50:38.241471 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-02-14 06:50:38.241477 | orchestrator | Saturday 14 February 2026 06:50:06 +0000 (0:00:02.008) 1:13:19.035 *****
2026-02-14 06:50:38.241483 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-14 06:50:38.241489
| orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-14 06:50:38.241495 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-14 06:50:38.241501 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-14 06:50:38.241508 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-14 06:50:38.241515 | orchestrator | 2026-02-14 06:50:38.241521 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-02-14 06:50:38.241527 | orchestrator | Saturday 14 February 2026 06:50:37 +0000 (0:00:30.700) 1:13:49.735 ***** 2026-02-14 06:50:38.241533 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:50:38.241539 | orchestrator | 2026-02-14 06:50:38.241545 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-02-14 06:50:38.241555 | orchestrator | Saturday 14 February 2026 06:50:38 +0000 (0:00:00.815) 1:13:50.551 ***** 2026-02-14 06:51:31.422699 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:51:31.422789 | orchestrator | 2026-02-14 06:51:31.422797 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-02-14 06:51:31.422805 | orchestrator | Saturday 14 February 2026 06:50:39 +0000 (0:00:00.779) 1:13:51.331 ***** 2026-02-14 06:51:31.422811 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-5 2026-02-14 06:51:31.422817 | orchestrator | 2026-02-14 06:51:31.422823 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-02-14 06:51:31.422841 | orchestrator | Saturday 14 February 2026 06:50:40 +0000 (0:00:01.296) 1:13:52.627 ***** 2026-02-14 06:51:31.422847 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-5 2026-02-14 06:51:31.422852 | orchestrator | 2026-02-14 06:51:31.422858 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-02-14 06:51:31.422863 | orchestrator | Saturday 14 February 2026 06:50:41 +0000 (0:00:01.098) 1:13:53.726 ***** 2026-02-14 06:51:31.422869 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:51:31.422875 | orchestrator | 2026-02-14 06:51:31.422880 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-02-14 06:51:31.422886 | orchestrator | Saturday 14 February 2026 06:50:43 +0000 (0:00:02.065) 1:13:55.791 ***** 2026-02-14 06:51:31.422891 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:51:31.422897 | orchestrator | 2026-02-14 06:51:31.422902 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-02-14 06:51:31.422907 | orchestrator | Saturday 14 February 2026 06:50:45 +0000 (0:00:01.968) 1:13:57.760 ***** 2026-02-14 06:51:31.422913 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:51:31.422918 | orchestrator | 2026-02-14 06:51:31.422924 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-02-14 06:51:31.422929 | orchestrator | Saturday 14 February 2026 06:50:47 +0000 (0:00:02.236) 1:13:59.996 ***** 2026-02-14 06:51:31.422935 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-14 06:51:31.422960 | orchestrator | 2026-02-14 06:51:31.422966 | orchestrator | PLAY [Upgrade ceph rbd mirror node] ******************************************** 2026-02-14 06:51:31.422971 | 
orchestrator | skipping: no hosts matched 2026-02-14 06:51:31.422976 | orchestrator | 2026-02-14 06:51:31.422982 | orchestrator | PLAY [Upgrade ceph nfs node] *************************************************** 2026-02-14 06:51:31.422987 | orchestrator | skipping: no hosts matched 2026-02-14 06:51:31.422992 | orchestrator | 2026-02-14 06:51:31.422998 | orchestrator | PLAY [Upgrade ceph client node] ************************************************ 2026-02-14 06:51:31.423003 | orchestrator | skipping: no hosts matched 2026-02-14 06:51:31.423008 | orchestrator | 2026-02-14 06:51:31.423014 | orchestrator | PLAY [Upgrade ceph-crash daemons] ********************************************** 2026-02-14 06:51:31.423019 | orchestrator | 2026-02-14 06:51:31.423024 | orchestrator | TASK [Stop the ceph-crash service] ********************************************* 2026-02-14 06:51:31.423030 | orchestrator | Saturday 14 February 2026 06:50:51 +0000 (0:00:04.205) 1:14:04.202 ***** 2026-02-14 06:51:31.423035 | orchestrator | changed: [testbed-node-0] 2026-02-14 06:51:31.423041 | orchestrator | changed: [testbed-node-1] 2026-02-14 06:51:31.423046 | orchestrator | changed: [testbed-node-2] 2026-02-14 06:51:31.423051 | orchestrator | changed: [testbed-node-3] 2026-02-14 06:51:31.423057 | orchestrator | changed: [testbed-node-4] 2026-02-14 06:51:31.423062 | orchestrator | changed: [testbed-node-5] 2026-02-14 06:51:31.423068 | orchestrator | 2026-02-14 06:51:31.423073 | orchestrator | TASK [Mask and disable the ceph-crash service] ********************************* 2026-02-14 06:51:31.423078 | orchestrator | Saturday 14 February 2026 06:50:54 +0000 (0:00:02.903) 1:14:07.105 ***** 2026-02-14 06:51:31.423084 | orchestrator | changed: [testbed-node-1] 2026-02-14 06:51:31.423089 | orchestrator | changed: [testbed-node-0] 2026-02-14 06:51:31.423094 | orchestrator | changed: [testbed-node-3] 2026-02-14 06:51:31.423099 | orchestrator | changed: [testbed-node-2] 2026-02-14 06:51:31.423105 | 
orchestrator | changed: [testbed-node-4] 2026-02-14 06:51:31.423110 | orchestrator | changed: [testbed-node-5] 2026-02-14 06:51:31.423115 | orchestrator | 2026-02-14 06:51:31.423120 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-14 06:51:31.423126 | orchestrator | Saturday 14 February 2026 06:50:58 +0000 (0:00:03.486) 1:14:10.592 ***** 2026-02-14 06:51:31.423131 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:51:31.423137 | orchestrator | ok: [testbed-node-1] 2026-02-14 06:51:31.423142 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:51:31.423147 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:51:31.423152 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:51:31.423158 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:51:31.423163 | orchestrator | 2026-02-14 06:51:31.423168 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-14 06:51:31.423174 | orchestrator | Saturday 14 February 2026 06:51:00 +0000 (0:00:02.200) 1:14:12.793 ***** 2026-02-14 06:51:31.423179 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:51:31.423185 | orchestrator | ok: [testbed-node-1] 2026-02-14 06:51:31.423190 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:51:31.423195 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:51:31.423200 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:51:31.423205 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:51:31.423211 | orchestrator | 2026-02-14 06:51:31.423216 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-14 06:51:31.423221 | orchestrator | Saturday 14 February 2026 06:51:02 +0000 (0:00:02.239) 1:14:15.032 ***** 2026-02-14 06:51:31.423228 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-14 06:51:31.423262 | 
orchestrator | 2026-02-14 06:51:31.423275 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-14 06:51:31.423282 | orchestrator | Saturday 14 February 2026 06:51:04 +0000 (0:00:02.151) 1:14:17.184 ***** 2026-02-14 06:51:31.423294 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-14 06:51:31.423300 | orchestrator | 2026-02-14 06:51:31.423317 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-14 06:51:31.423324 | orchestrator | Saturday 14 February 2026 06:51:07 +0000 (0:00:02.249) 1:14:19.433 ***** 2026-02-14 06:51:31.423330 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:51:31.423336 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:51:31.423342 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:51:31.423348 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:51:31.423354 | orchestrator | ok: [testbed-node-1] 2026-02-14 06:51:31.423361 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:51:31.423367 | orchestrator | 2026-02-14 06:51:31.423377 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-14 06:51:31.423383 | orchestrator | Saturday 14 February 2026 06:51:09 +0000 (0:00:02.100) 1:14:21.534 ***** 2026-02-14 06:51:31.423390 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:51:31.423396 | orchestrator | skipping: [testbed-node-1] 2026-02-14 06:51:31.423402 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:51:31.423408 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:51:31.423414 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:51:31.423420 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:51:31.423427 | orchestrator | 2026-02-14 06:51:31.423433 | orchestrator | TASK [ceph-handler : Check for a mds container] 
******************************** 2026-02-14 06:51:31.423439 | orchestrator | Saturday 14 February 2026 06:51:11 +0000 (0:00:02.649) 1:14:24.184 ***** 2026-02-14 06:51:31.423445 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:51:31.423451 | orchestrator | skipping: [testbed-node-1] 2026-02-14 06:51:31.423457 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:51:31.423464 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:51:31.423470 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:51:31.423476 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:51:31.423482 | orchestrator | 2026-02-14 06:51:31.423488 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-14 06:51:31.423495 | orchestrator | Saturday 14 February 2026 06:51:14 +0000 (0:00:02.437) 1:14:26.622 ***** 2026-02-14 06:51:31.423501 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:51:31.423507 | orchestrator | skipping: [testbed-node-1] 2026-02-14 06:51:31.423513 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:51:31.423519 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:51:31.423525 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:51:31.423531 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:51:31.423537 | orchestrator | 2026-02-14 06:51:31.423543 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-14 06:51:31.423550 | orchestrator | Saturday 14 February 2026 06:51:16 +0000 (0:00:02.396) 1:14:29.019 ***** 2026-02-14 06:51:31.423556 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:51:31.423562 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:51:31.423569 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:51:31.423575 | orchestrator | ok: [testbed-node-1] 2026-02-14 06:51:31.423581 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:51:31.423587 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:51:31.423593 | orchestrator | 
2026-02-14 06:51:31.423599 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-14 06:51:31.423606 | orchestrator | Saturday 14 February 2026 06:51:18 +0000 (0:00:02.036) 1:14:31.055 ***** 2026-02-14 06:51:31.423612 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:51:31.423619 | orchestrator | skipping: [testbed-node-1] 2026-02-14 06:51:31.423625 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:51:31.423630 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:51:31.423636 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:51:31.423641 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:51:31.423646 | orchestrator | 2026-02-14 06:51:31.423652 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-14 06:51:31.423661 | orchestrator | Saturday 14 February 2026 06:51:20 +0000 (0:00:01.742) 1:14:32.798 ***** 2026-02-14 06:51:31.423666 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:51:31.423672 | orchestrator | skipping: [testbed-node-1] 2026-02-14 06:51:31.423677 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:51:31.423682 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:51:31.423688 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:51:31.423693 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:51:31.423699 | orchestrator | 2026-02-14 06:51:31.423704 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-14 06:51:31.423709 | orchestrator | Saturday 14 February 2026 06:51:22 +0000 (0:00:02.098) 1:14:34.896 ***** 2026-02-14 06:51:31.423715 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:51:31.423720 | orchestrator | ok: [testbed-node-1] 2026-02-14 06:51:31.423726 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:51:31.423731 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:51:31.423736 | orchestrator | ok: [testbed-node-4] 
2026-02-14 06:51:31.423742 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:51:31.423747 | orchestrator | 2026-02-14 06:51:31.423752 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-14 06:51:31.423758 | orchestrator | Saturday 14 February 2026 06:51:24 +0000 (0:00:02.241) 1:14:37.138 ***** 2026-02-14 06:51:31.423763 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:51:31.423769 | orchestrator | ok: [testbed-node-1] 2026-02-14 06:51:31.423774 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:51:31.423779 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:51:31.423785 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:51:31.423790 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:51:31.423797 | orchestrator | 2026-02-14 06:51:31.423805 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-14 06:51:31.423813 | orchestrator | Saturday 14 February 2026 06:51:27 +0000 (0:00:02.543) 1:14:39.682 ***** 2026-02-14 06:51:31.423820 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:51:31.423829 | orchestrator | skipping: [testbed-node-1] 2026-02-14 06:51:31.423836 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:51:31.423844 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:51:31.423852 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:51:31.423860 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:51:31.423867 | orchestrator | 2026-02-14 06:51:31.423875 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-14 06:51:31.423883 | orchestrator | Saturday 14 February 2026 06:51:29 +0000 (0:00:01.843) 1:14:41.526 ***** 2026-02-14 06:51:31.423891 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:51:31.423899 | orchestrator | ok: [testbed-node-1] 2026-02-14 06:51:31.423906 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:51:31.423914 | orchestrator | skipping: 
[testbed-node-3] 2026-02-14 06:51:31.423922 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:51:31.423930 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:51:31.423937 | orchestrator | 2026-02-14 06:51:31.423950 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-14 06:52:27.845261 | orchestrator | Saturday 14 February 2026 06:51:31 +0000 (0:00:02.196) 1:14:43.722 ***** 2026-02-14 06:52:27.845380 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:52:27.845400 | orchestrator | skipping: [testbed-node-1] 2026-02-14 06:52:27.845412 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:52:27.845423 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:52:27.845436 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:52:27.845447 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:52:27.845458 | orchestrator | 2026-02-14 06:52:27.845485 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-14 06:52:27.845497 | orchestrator | Saturday 14 February 2026 06:51:33 +0000 (0:00:01.768) 1:14:45.491 ***** 2026-02-14 06:52:27.845508 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:52:27.845519 | orchestrator | skipping: [testbed-node-1] 2026-02-14 06:52:27.845554 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:52:27.845566 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:52:27.845577 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:52:27.845588 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:52:27.845598 | orchestrator | 2026-02-14 06:52:27.845609 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-14 06:52:27.845620 | orchestrator | Saturday 14 February 2026 06:51:35 +0000 (0:00:02.117) 1:14:47.608 ***** 2026-02-14 06:52:27.845631 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:52:27.845642 | orchestrator | skipping: [testbed-node-1] 2026-02-14 
06:52:27.845652 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:52:27.845663 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:52:27.845674 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:52:27.845684 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:52:27.845695 | orchestrator | 2026-02-14 06:52:27.845708 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-14 06:52:27.845720 | orchestrator | Saturday 14 February 2026 06:51:37 +0000 (0:00:01.836) 1:14:49.445 ***** 2026-02-14 06:52:27.845732 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:52:27.845744 | orchestrator | skipping: [testbed-node-1] 2026-02-14 06:52:27.845757 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:52:27.845770 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:52:27.845783 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:52:27.845794 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:52:27.845806 | orchestrator | 2026-02-14 06:52:27.845819 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-14 06:52:27.845831 | orchestrator | Saturday 14 February 2026 06:51:39 +0000 (0:00:01.992) 1:14:51.437 ***** 2026-02-14 06:52:27.845843 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:52:27.845856 | orchestrator | skipping: [testbed-node-1] 2026-02-14 06:52:27.845868 | orchestrator | skipping: [testbed-node-2] 2026-02-14 06:52:27.845880 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:52:27.845891 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:52:27.845903 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:52:27.845916 | orchestrator | 2026-02-14 06:52:27.845928 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-14 06:52:27.845939 | orchestrator | Saturday 14 February 2026 06:51:40 +0000 (0:00:01.797) 1:14:53.235 ***** 2026-02-14 06:52:27.845952 | 
orchestrator | ok: [testbed-node-0] 2026-02-14 06:52:27.845964 | orchestrator | ok: [testbed-node-1] 2026-02-14 06:52:27.845976 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:52:27.845988 | orchestrator | skipping: [testbed-node-3] 2026-02-14 06:52:27.846000 | orchestrator | skipping: [testbed-node-4] 2026-02-14 06:52:27.846011 | orchestrator | skipping: [testbed-node-5] 2026-02-14 06:52:27.846105 | orchestrator | 2026-02-14 06:52:27.846117 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-14 06:52:27.846127 | orchestrator | Saturday 14 February 2026 06:51:42 +0000 (0:00:01.762) 1:14:54.997 ***** 2026-02-14 06:52:27.846138 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:52:27.846149 | orchestrator | ok: [testbed-node-1] 2026-02-14 06:52:27.846159 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:52:27.846170 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:52:27.846180 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:52:27.846190 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:52:27.846201 | orchestrator | 2026-02-14 06:52:27.846212 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-14 06:52:27.846223 | orchestrator | Saturday 14 February 2026 06:51:44 +0000 (0:00:02.145) 1:14:57.143 ***** 2026-02-14 06:52:27.846234 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:52:27.846244 | orchestrator | ok: [testbed-node-1] 2026-02-14 06:52:27.846254 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:52:27.846265 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:52:27.846275 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:52:27.846285 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:52:27.846296 | orchestrator | 2026-02-14 06:52:27.846316 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-02-14 06:52:27.846326 | orchestrator | Saturday 14 February 2026 06:51:47 +0000 (0:00:02.854) 
1:14:59.998 ***** 2026-02-14 06:52:27.846337 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:52:27.846348 | orchestrator | 2026-02-14 06:52:27.846359 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-02-14 06:52:27.846369 | orchestrator | Saturday 14 February 2026 06:51:50 +0000 (0:00:03.122) 1:15:03.120 ***** 2026-02-14 06:52:27.846380 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:52:27.846390 | orchestrator | 2026-02-14 06:52:27.846401 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-02-14 06:52:27.846412 | orchestrator | Saturday 14 February 2026 06:51:53 +0000 (0:00:03.039) 1:15:06.160 ***** 2026-02-14 06:52:27.846422 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:52:27.846433 | orchestrator | ok: [testbed-node-1] 2026-02-14 06:52:27.846443 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:52:27.846454 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:52:27.846464 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:52:27.846475 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:52:27.846485 | orchestrator | 2026-02-14 06:52:27.846496 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-02-14 06:52:27.846506 | orchestrator | Saturday 14 February 2026 06:51:56 +0000 (0:00:02.588) 1:15:08.748 ***** 2026-02-14 06:52:27.846517 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:52:27.846527 | orchestrator | ok: [testbed-node-1] 2026-02-14 06:52:27.846538 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:52:27.846548 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:52:27.846558 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:52:27.846569 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:52:27.846579 | orchestrator | 2026-02-14 06:52:27.846590 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2026-02-14 06:52:27.846620 | orchestrator | 
Saturday 14 February 2026 06:51:58 +0000 (0:00:02.510) 1:15:11.259 ***** 2026-02-14 06:52:27.846633 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-14 06:52:27.846645 | orchestrator | 2026-02-14 06:52:27.846656 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-02-14 06:52:27.846673 | orchestrator | Saturday 14 February 2026 06:52:01 +0000 (0:00:02.624) 1:15:13.884 ***** 2026-02-14 06:52:27.846684 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:52:27.846694 | orchestrator | ok: [testbed-node-1] 2026-02-14 06:52:27.846705 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:52:27.846715 | orchestrator | ok: [testbed-node-3] 2026-02-14 06:52:27.846725 | orchestrator | ok: [testbed-node-4] 2026-02-14 06:52:27.846736 | orchestrator | ok: [testbed-node-5] 2026-02-14 06:52:27.846746 | orchestrator | 2026-02-14 06:52:27.846757 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-02-14 06:52:27.846768 | orchestrator | Saturday 14 February 2026 06:52:04 +0000 (0:00:02.542) 1:15:16.426 ***** 2026-02-14 06:52:27.846778 | orchestrator | changed: [testbed-node-3] 2026-02-14 06:52:27.846789 | orchestrator | changed: [testbed-node-4] 2026-02-14 06:52:27.846800 | orchestrator | changed: [testbed-node-0] 2026-02-14 06:52:27.846810 | orchestrator | changed: [testbed-node-1] 2026-02-14 06:52:27.846821 | orchestrator | changed: [testbed-node-5] 2026-02-14 06:52:27.846831 | orchestrator | changed: [testbed-node-2] 2026-02-14 06:52:27.846842 | orchestrator | 2026-02-14 06:52:27.846852 | orchestrator | PLAY [Complete upgrade] ******************************************************** 2026-02-14 06:52:27.846863 | orchestrator | 2026-02-14 06:52:27.846874 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 
2026-02-14 06:52:27.846885 | orchestrator | Saturday 14 February 2026 06:52:09 +0000 (0:00:05.293) 1:15:21.720 ***** 2026-02-14 06:52:27.846895 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:52:27.846906 | orchestrator | ok: [testbed-node-1] 2026-02-14 06:52:27.846917 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:52:27.846934 | orchestrator | 2026-02-14 06:52:27.846945 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-14 06:52:27.846955 | orchestrator | Saturday 14 February 2026 06:52:11 +0000 (0:00:01.689) 1:15:23.410 ***** 2026-02-14 06:52:27.846966 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:52:27.846976 | orchestrator | ok: [testbed-node-1] 2026-02-14 06:52:27.846987 | orchestrator | ok: [testbed-node-2] 2026-02-14 06:52:27.846997 | orchestrator | 2026-02-14 06:52:27.847008 | orchestrator | TASK [Container | disallow pre-reef OSDs and enable all new reef-only functionality] *** 2026-02-14 06:52:27.847019 | orchestrator | Saturday 14 February 2026 06:52:12 +0000 (0:00:01.394) 1:15:24.804 ***** 2026-02-14 06:52:27.847030 | orchestrator | ok: [testbed-node-0] 2026-02-14 06:52:27.847040 | orchestrator | 2026-02-14 06:52:27.847051 | orchestrator | TASK [Non container | disallow pre-reef OSDs and enable all new reef-only functionality] *** 2026-02-14 06:52:27.847061 | orchestrator | Saturday 14 February 2026 06:52:14 +0000 (0:00:02.342) 1:15:27.147 ***** 2026-02-14 06:52:27.847072 | orchestrator | skipping: [testbed-node-0] 2026-02-14 06:52:27.847099 | orchestrator | 2026-02-14 06:52:27.847110 | orchestrator | PLAY [Upgrade node-exporter] *************************************************** 2026-02-14 06:52:27.847120 | orchestrator | 2026-02-14 06:52:27.847131 | orchestrator | TASK [Stop node-exporter] ****************************************************** 2026-02-14 06:52:27.847142 | orchestrator | Saturday 14 February 2026 06:52:17 +0000 (0:00:02.370) 1:15:29.518 ***** 2026-02-14 
06:52:27.847153 | orchestrator | skipping: [testbed-node-0]
2026-02-14 06:52:27.847163 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:52:27.847174 | orchestrator | skipping: [testbed-node-2]
2026-02-14 06:52:27.847185 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:52:27.847195 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:52:27.847206 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:52:27.847217 | orchestrator | skipping: [testbed-manager]
2026-02-14 06:52:27.847227 | orchestrator |
2026-02-14 06:52:27.847238 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-14 06:52:27.847249 | orchestrator | Saturday 14 February 2026 06:52:19 +0000 (0:00:02.203) 1:15:31.721 *****
2026-02-14 06:52:27.847259 | orchestrator | skipping: [testbed-node-0]
2026-02-14 06:52:27.847270 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:52:27.847281 | orchestrator | skipping: [testbed-node-2]
2026-02-14 06:52:27.847291 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:52:27.847302 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:52:27.847312 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:52:27.847323 | orchestrator | skipping: [testbed-manager]
2026-02-14 06:52:27.847333 | orchestrator |
2026-02-14 06:52:27.847344 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ********
2026-02-14 06:52:27.847355 | orchestrator | Saturday 14 February 2026 06:52:21 +0000 (0:00:02.517) 1:15:34.239 *****
2026-02-14 06:52:27.847365 | orchestrator | skipping: [testbed-node-0]
2026-02-14 06:52:27.847376 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:52:27.847386 | orchestrator | skipping: [testbed-node-2]
2026-02-14 06:52:27.847397 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:52:27.847407 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:52:27.847418 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:52:27.847428 | orchestrator | skipping: [testbed-manager]
2026-02-14 06:52:27.847439 | orchestrator |
2026-02-14 06:52:27.847449 | orchestrator | TASK [ceph-container-common : Container registry authentication] ***************
2026-02-14 06:52:27.847460 | orchestrator | Saturday 14 February 2026 06:52:24 +0000 (0:00:02.839) 1:15:37.079 *****
2026-02-14 06:52:27.847471 | orchestrator | skipping: [testbed-node-0]
2026-02-14 06:52:27.847481 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:52:27.847492 | orchestrator | skipping: [testbed-node-2]
2026-02-14 06:52:27.847503 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:52:27.847513 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:52:27.847523 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:52:27.847534 | orchestrator | skipping: [testbed-manager]
2026-02-14 06:52:27.847552 | orchestrator |
2026-02-14 06:52:27.847563 | orchestrator | TASK [ceph-node-exporter : Include setup_container.yml] ************************
2026-02-14 06:52:27.847573 | orchestrator | Saturday 14 February 2026 06:52:27 +0000 (0:00:02.493) 1:15:39.572 *****
2026-02-14 06:52:27.847584 | orchestrator | skipping: [testbed-node-0]
2026-02-14 06:52:27.847595 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:52:27.847605 | orchestrator | skipping: [testbed-node-2]
2026-02-14 06:52:27.847623 | orchestrator | skipping: [testbed-node-3]
2026-02-14 06:53:17.770621 | orchestrator | skipping: [testbed-node-4]
2026-02-14 06:53:17.770744 | orchestrator | skipping: [testbed-node-5]
2026-02-14 06:53:17.770760 | orchestrator | skipping: [testbed-manager]
2026-02-14 06:53:17.770772 | orchestrator |
2026-02-14 06:53:17.770784 | orchestrator | PLAY [Upgrade monitoring node] *************************************************
2026-02-14 06:53:17.770796 | orchestrator |
2026-02-14 06:53:17.770807 | orchestrator | TASK [Stop monitoring services] ************************************************
2026-02-14 06:53:17.770836 | orchestrator | Saturday 14 February 2026 06:52:30 +0000 (0:00:03.012) 1:15:42.585 *****
2026-02-14 06:53:17.770849 | orchestrator | skipping: [testbed-manager] => (item=alertmanager)
2026-02-14 06:53:17.770861 | orchestrator | skipping: [testbed-manager] => (item=prometheus)
2026-02-14 06:53:17.770871 | orchestrator | skipping: [testbed-manager] => (item=grafana-server)
2026-02-14 06:53:17.770882 | orchestrator | skipping: [testbed-manager]
2026-02-14 06:53:17.770893 | orchestrator |
2026-02-14 06:53:17.770904 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************
2026-02-14 06:53:17.770919 | orchestrator | Saturday 14 February 2026 06:52:31 +0000 (0:00:01.178) 1:15:43.763 *****
2026-02-14 06:53:17.770940 | orchestrator | skipping: [testbed-manager]
2026-02-14 06:53:17.771006 | orchestrator |
2026-02-14 06:53:17.771017 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************
2026-02-14 06:53:17.771028 | orchestrator | Saturday 14 February 2026 06:52:32 +0000 (0:00:01.107) 1:15:44.871 *****
2026-02-14 06:53:17.771039 | orchestrator | skipping: [testbed-manager]
2026-02-14 06:53:17.771050 | orchestrator |
2026-02-14 06:53:17.771061 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] ***********************
2026-02-14 06:53:17.771072 | orchestrator | Saturday 14 February 2026 06:52:33 +0000 (0:00:01.196) 1:15:46.068 *****
2026-02-14 06:53:17.771082 | orchestrator | skipping: [testbed-manager]
2026-02-14 06:53:17.771093 | orchestrator |
2026-02-14 06:53:17.771104 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] ***********************
2026-02-14 06:53:17.771199 | orchestrator | Saturday 14 February 2026 06:52:34 +0000 (0:00:01.144) 1:15:47.213 *****
2026-02-14 06:53:17.771212 | orchestrator | skipping: [testbed-manager]
2026-02-14 06:53:17.771225 | orchestrator |
2026-02-14 06:53:17.771237 | orchestrator | TASK [ceph-prometheus : Create prometheus directories] *************************
2026-02-14 06:53:17.771250 | orchestrator | Saturday 14 February 2026 06:52:36 +0000 (0:00:01.357) 1:15:48.570 *****
2026-02-14 06:53:17.771261 | orchestrator | skipping: [testbed-manager] => (item=/etc/prometheus)
2026-02-14 06:53:17.771274 | orchestrator | skipping: [testbed-manager] => (item=/var/lib/prometheus)
2026-02-14 06:53:17.771287 | orchestrator | skipping: [testbed-manager]
2026-02-14 06:53:17.771299 | orchestrator |
2026-02-14 06:53:17.771311 | orchestrator | TASK [ceph-prometheus : Write prometheus config file] **************************
2026-02-14 06:53:17.771323 | orchestrator | Saturday 14 February 2026 06:52:37 +0000 (0:00:01.170) 1:15:49.741 *****
2026-02-14 06:53:17.771335 | orchestrator | skipping: [testbed-manager]
2026-02-14 06:53:17.771348 | orchestrator |
2026-02-14 06:53:17.771359 | orchestrator | TASK [ceph-prometheus : Make sure the alerting rules directory exists] *********
2026-02-14 06:53:17.771386 | orchestrator | Saturday 14 February 2026 06:52:38 +0000 (0:00:01.191) 1:15:50.932 *****
2026-02-14 06:53:17.771399 | orchestrator | skipping: [testbed-manager]
2026-02-14 06:53:17.771411 | orchestrator |
2026-02-14 06:53:17.771423 | orchestrator | TASK [ceph-prometheus : Copy alerting rules] ***********************************
2026-02-14 06:53:17.771436 | orchestrator | Saturday 14 February 2026 06:52:39 +0000 (0:00:01.122) 1:15:52.055 *****
2026-02-14 06:53:17.771489 | orchestrator | skipping: [testbed-manager]
2026-02-14 06:53:17.771502 | orchestrator |
2026-02-14 06:53:17.771514 | orchestrator | TASK [ceph-prometheus : Create alertmanager directories] ***********************
2026-02-14 06:53:17.771526 | orchestrator | Saturday 14 February 2026 06:52:40 +0000 (0:00:01.161) 1:15:53.217 *****
2026-02-14 06:53:17.771536 | orchestrator | skipping: [testbed-manager] => (item=/etc/alertmanager)
2026-02-14 06:53:17.771547 | orchestrator | skipping: [testbed-manager] => (item=/var/lib/alertmanager)
2026-02-14 06:53:17.771557 | orchestrator | skipping: [testbed-manager]
2026-02-14 06:53:17.771568 | orchestrator |
2026-02-14 06:53:17.771579 | orchestrator | TASK [ceph-prometheus : Write alertmanager config file] ************************
2026-02-14 06:53:17.771590 | orchestrator | Saturday 14 February 2026 06:52:42 +0000 (0:00:01.115) 1:15:54.332 *****
2026-02-14 06:53:17.771600 | orchestrator | skipping: [testbed-manager]
2026-02-14 06:53:17.771611 | orchestrator |
2026-02-14 06:53:17.771621 | orchestrator | TASK [ceph-prometheus : Include setup_container.yml] ***************************
2026-02-14 06:53:17.771632 | orchestrator | Saturday 14 February 2026 06:52:43 +0000 (0:00:01.167) 1:15:55.500 *****
2026-02-14 06:53:17.771643 | orchestrator | skipping: [testbed-manager]
2026-02-14 06:53:17.771653 | orchestrator |
2026-02-14 06:53:17.771664 | orchestrator | TASK [ceph-grafana : Include setup_container.yml] ******************************
2026-02-14 06:53:17.771675 | orchestrator | Saturday 14 February 2026 06:52:44 +0000 (0:00:01.117) 1:15:56.617 *****
2026-02-14 06:53:17.771685 | orchestrator | skipping: [testbed-manager]
2026-02-14 06:53:17.771696 | orchestrator |
2026-02-14 06:53:17.771707 | orchestrator | TASK [ceph-grafana : Include configure_grafana.yml] ****************************
2026-02-14 06:53:17.771717 | orchestrator | Saturday 14 February 2026 06:52:45 +0000 (0:00:01.207) 1:15:57.825 *****
2026-02-14 06:53:17.771728 | orchestrator | skipping: [testbed-manager]
2026-02-14 06:53:17.771739 | orchestrator |
2026-02-14 06:53:17.771749 | orchestrator | PLAY [Upgrade ceph dashboard] **************************************************
2026-02-14 06:53:17.771760 | orchestrator |
2026-02-14 06:53:17.771770 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-14 06:53:17.771781 | orchestrator | Saturday 14 February 2026 06:52:47 +0000 (0:00:01.986) 1:15:59.812 *****
2026-02-14 06:53:17.771792 | orchestrator | skipping: [testbed-node-0]
2026-02-14 06:53:17.771802 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:53:17.771813 | orchestrator | skipping: [testbed-node-2]
2026-02-14 06:53:17.771823 | orchestrator |
2026-02-14 06:53:17.771834 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************
2026-02-14 06:53:17.771845 | orchestrator | Saturday 14 February 2026 06:52:49 +0000 (0:00:01.728) 1:16:01.541 *****
2026-02-14 06:53:17.771856 | orchestrator | skipping: [testbed-node-0]
2026-02-14 06:53:17.771866 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:53:17.771897 | orchestrator | skipping: [testbed-node-2]
2026-02-14 06:53:17.771909 | orchestrator |
2026-02-14 06:53:17.771919 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************
2026-02-14 06:53:17.771930 | orchestrator | Saturday 14 February 2026 06:52:50 +0000 (0:00:01.471) 1:16:03.012 *****
2026-02-14 06:53:17.771941 | orchestrator | skipping: [testbed-node-0]
2026-02-14 06:53:17.772009 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:53:17.772030 | orchestrator | skipping: [testbed-node-2]
2026-02-14 06:53:17.772048 | orchestrator |
2026-02-14 06:53:17.772070 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] ***********************
2026-02-14 06:53:17.772081 | orchestrator | Saturday 14 February 2026 06:52:52 +0000 (0:00:01.535) 1:16:04.548 *****
2026-02-14 06:53:17.772092 | orchestrator | skipping: [testbed-node-0]
2026-02-14 06:53:17.772102 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:53:17.772112 | orchestrator | skipping: [testbed-node-2]
2026-02-14 06:53:17.772123 | orchestrator |
2026-02-14 06:53:17.772134 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] ***********************
2026-02-14 06:53:17.772144 | orchestrator | Saturday 14 February 2026 06:52:53 +0000 (0:00:01.443) 1:16:05.992 *****
2026-02-14 06:53:17.772165 | orchestrator | skipping: [testbed-node-0]
2026-02-14 06:53:17.772176 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:53:17.772186 | orchestrator | skipping: [testbed-node-2]
2026-02-14 06:53:17.772197 | orchestrator |
2026-02-14 06:53:17.772207 | orchestrator | TASK [ceph-dashboard : Include configure_dashboard.yml] ************************
2026-02-14 06:53:17.772218 | orchestrator | Saturday 14 February 2026 06:52:55 +0000 (0:00:01.459) 1:16:07.451 *****
2026-02-14 06:53:17.772228 | orchestrator | skipping: [testbed-node-0]
2026-02-14 06:53:17.772239 | orchestrator | skipping: [testbed-node-1]
2026-02-14 06:53:17.772249 | orchestrator | skipping: [testbed-node-2]
2026-02-14 06:53:17.772260 | orchestrator |
2026-02-14 06:53:17.772270 | orchestrator | TASK [ceph-dashboard : Print dashboard URL] ************************************
2026-02-14 06:53:17.772281 | orchestrator | Saturday 14 February 2026 06:52:56 +0000 (0:00:01.813) 1:16:09.264 *****
2026-02-14 06:53:17.772291 | orchestrator | skipping: [testbed-node-0]
2026-02-14 06:53:17.772302 | orchestrator |
2026-02-14 06:53:17.772313 | orchestrator | PLAY [Switch any existing crush buckets to straw2] *****************************
2026-02-14 06:53:17.772323 | orchestrator |
2026-02-14 06:53:17.772334 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-14 06:53:17.772344 | orchestrator | Saturday 14 February 2026 06:52:58 +0000 (0:00:01.589) 1:16:10.853 *****
2026-02-14 06:53:17.772355 | orchestrator | ok: [testbed-node-0]
2026-02-14 06:53:17.772366 | orchestrator |
2026-02-14 06:53:17.772376 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-14 06:53:17.772387 | orchestrator | Saturday 14 February 2026 06:52:59 +0000 (0:00:01.468) 1:16:12.322 *****
2026-02-14 06:53:17.772398 | orchestrator | ok: [testbed-node-0]
2026-02-14 06:53:17.772408 | orchestrator |
2026-02-14 06:53:17.772418 | orchestrator | TASK [Set_fact ceph_cmd] *******************************************************
2026-02-14 06:53:17.772429 | orchestrator | Saturday 14 February 2026 06:53:01 +0000 (0:00:01.126) 1:16:13.448 *****
2026-02-14 06:53:17.772439 | orchestrator | ok: [testbed-node-0]
2026-02-14 06:53:17.772450 | orchestrator |
2026-02-14 06:53:17.772461 | orchestrator | TASK [Backup the crushmap] *****************************************************
2026-02-14 06:53:17.772471 | orchestrator | Saturday 14 February 2026 06:53:02 +0000 (0:00:01.128) 1:16:14.577 *****
2026-02-14 06:53:17.772482 | orchestrator | ok: [testbed-node-0]
2026-02-14 06:53:17.772493 | orchestrator |
2026-02-14 06:53:17.772503 | orchestrator | TASK [Switch crush buckets to straw2] ******************************************
2026-02-14 06:53:17.772514 | orchestrator | Saturday 14 February 2026 06:53:05 +0000 (0:00:02.889) 1:16:17.466 *****
2026-02-14 06:53:17.772525 | orchestrator | ok: [testbed-node-0]
2026-02-14 06:53:17.772535 | orchestrator |
2026-02-14 06:53:17.772546 | orchestrator | TASK [Remove crushmap backup] **************************************************
2026-02-14 06:53:17.772556 | orchestrator | Saturday 14 February 2026 06:53:08 +0000 (0:00:03.174) 1:16:20.641 *****
2026-02-14 06:53:17.772567 | orchestrator | changed: [testbed-node-0]
2026-02-14 06:53:17.772578 | orchestrator |
2026-02-14 06:53:17.772588 | orchestrator | PLAY [Show ceph status] ********************************************************
2026-02-14 06:53:17.772599 | orchestrator |
2026-02-14 06:53:17.772609 | orchestrator | TASK [Set_fact container_exec_cmd_status] **************************************
2026-02-14 06:53:17.772620 | orchestrator | Saturday 14 February 2026 06:53:10 +0000 (0:00:01.877) 1:16:22.518 *****
2026-02-14 06:53:17.772630 | orchestrator | ok: [testbed-node-0]
2026-02-14 06:53:17.772641 | orchestrator | ok: [testbed-node-1]
2026-02-14 06:53:17.772652 | orchestrator | ok: [testbed-node-2]
2026-02-14 06:53:17.772662 | orchestrator |
2026-02-14 06:53:17.772673 | orchestrator | TASK [Show ceph status] ********************************************************
2026-02-14 06:53:17.772683 | orchestrator | Saturday 14 February 2026 06:53:11 +0000 (0:00:01.571) 1:16:24.090 *****
2026-02-14 06:53:17.772694 | orchestrator | ok: [testbed-node-0]
2026-02-14 06:53:17.772705 | orchestrator |
2026-02-14 06:53:17.772715 | orchestrator | TASK [Show all daemons version] ************************************************
2026-02-14 06:53:17.772726 | orchestrator | Saturday 14 February 2026 06:53:14 +0000 (0:00:02.366) 1:16:26.457 *****
2026-02-14 06:53:17.772744 | orchestrator | ok: [testbed-node-0]
2026-02-14 06:53:17.772755 | orchestrator |
2026-02-14 06:53:17.772766 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 06:53:17.772777 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-14 06:53:17.772790 | orchestrator | testbed-manager : ok=25  changed=1  unreachable=0 failed=0 skipped=76  rescued=0 ignored=0
2026-02-14 06:53:17.772802 | orchestrator | testbed-node-0 : ok=248  changed=19  unreachable=0 failed=0 skipped=369  rescued=0 ignored=0
2026-02-14 06:53:17.772812 | orchestrator | testbed-node-1 : ok=191  changed=14  unreachable=0 failed=0 skipped=343  rescued=0 ignored=0
2026-02-14 06:53:17.772831 | orchestrator | testbed-node-2 : ok=196  changed=14  unreachable=0 failed=0 skipped=344  rescued=0 ignored=0
2026-02-14 06:53:18.777909 | orchestrator | testbed-node-3 : ok=311  changed=21  unreachable=0 failed=0 skipped=341  rescued=0 ignored=0
2026-02-14 06:53:18.778082 | orchestrator | testbed-node-4 : ok=308  changed=16  unreachable=0 failed=0 skipped=352  rescued=0 ignored=0
2026-02-14 06:53:18.778095 | orchestrator | testbed-node-5 : ok=308  changed=17  unreachable=0 failed=0 skipped=351  rescued=0 ignored=0
2026-02-14 06:53:18.778103 | orchestrator |
2026-02-14 06:53:18.778111 | orchestrator |
2026-02-14 06:53:18.778118 | orchestrator |
2026-02-14 06:53:18.778126 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 06:53:18.778134 | orchestrator | Saturday 14 February 2026 06:53:17 +0000 (0:00:03.600) 1:16:30.057 *****
2026-02-14 06:53:18.778141 | orchestrator | ===============================================================================
2026-02-14 06:53:18.778148 | orchestrator | Re-enable pg autoscale on pools ---------------------------------------- 75.93s
2026-02-14 06:53:18.778156 | orchestrator | Disable pg autoscale on pools ------------------------------------------ 75.66s
2026-02-14 06:53:18.778163 | orchestrator | Gather and delegate facts ---------------------------------------------- 32.42s
2026-02-14 06:53:18.778170 | orchestrator | Waiting for clean pgs... ----------------------------------------------- 30.95s
2026-02-14 06:53:18.778177 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.82s
2026-02-14 06:53:18.778184 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.70s
2026-02-14 06:53:18.778191 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.34s
2026-02-14 06:53:18.778198 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 29.29s
2026-02-14 06:53:18.778205 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 23.01s
2026-02-14 06:53:18.778212 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.96s
2026-02-14 06:53:18.778219 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 22.18s
2026-02-14 06:53:18.778226 | orchestrator | Stop ceph mgr ---------------------------------------------------------- 18.27s
2026-02-14 06:53:18.778233 | orchestrator | Create potentially missing keys (rbd and rbd-mirror) ------------------- 16.70s
2026-02-14 06:53:18.778240 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 15.06s
2026-02-14 06:53:18.778247 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 14.25s
2026-02-14 06:53:18.778254 | orchestrator | Stop ceph osd ---------------------------------------------------------- 12.76s
2026-02-14 06:53:18.778261 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 12.44s
2026-02-14 06:53:18.778269 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 12.37s
2026-02-14 06:53:18.778295 | orchestrator | Restart active mds ----------------------------------------------------- 11.38s
2026-02-14 06:53:18.778302 | orchestrator | ceph-infra : Update cache for Debian based OSs ------------------------- 11.02s
2026-02-14 06:53:19.146582 | orchestrator | + osism apply cephclient
2026-02-14 06:53:21.238125 | orchestrator | 2026-02-14 06:53:21 | INFO  | Task a8387206-f5ba-4942-aa60-0ac2f541838d (cephclient) was prepared for execution.
2026-02-14 06:53:21.238228 | orchestrator | 2026-02-14 06:53:21 | INFO  | It takes a moment until task a8387206-f5ba-4942-aa60-0ac2f541838d (cephclient) has been started and output is visible here.
2026-02-14 06:53:49.799616 | orchestrator |
2026-02-14 06:53:49.799713 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-02-14 06:53:49.799725 | orchestrator |
2026-02-14 06:53:49.799734 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-02-14 06:53:49.799741 | orchestrator | Saturday 14 February 2026 06:53:27 +0000 (0:00:01.769) 0:00:01.769 *****
2026-02-14 06:53:49.799749 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-02-14 06:53:49.799758 | orchestrator |
2026-02-14 06:53:49.799766 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-02-14 06:53:49.799773 | orchestrator | Saturday 14 February 2026 06:53:29 +0000 (0:00:01.860) 0:00:03.629 *****
2026-02-14 06:53:49.799781 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-02-14 06:53:49.799788 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/data)
2026-02-14 06:53:49.799797 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-02-14 06:53:49.799804 | orchestrator |
2026-02-14 06:53:49.799812 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-02-14 06:53:49.799819 | orchestrator | Saturday 14 February 2026 06:53:32 +0000 (0:00:02.592) 0:00:06.222 *****
2026-02-14 06:53:49.799827 | orchestrator | ok: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-02-14 06:53:49.799834 | orchestrator |
2026-02-14 06:53:49.799841 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-02-14 06:53:49.799849 | orchestrator | Saturday 14 February 2026 06:53:34 +0000 (0:00:02.095) 0:00:08.317 *****
2026-02-14 06:53:49.799857 | orchestrator | ok: [testbed-manager]
2026-02-14 06:53:49.799865 | orchestrator |
2026-02-14 06:53:49.799931 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-02-14 06:53:49.799947 | orchestrator | Saturday 14 February 2026 06:53:36 +0000 (0:00:01.996) 0:00:10.314 *****
2026-02-14 06:53:49.799957 | orchestrator | ok: [testbed-manager]
2026-02-14 06:53:49.799964 | orchestrator |
2026-02-14 06:53:49.799971 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-02-14 06:53:49.799978 | orchestrator | Saturday 14 February 2026 06:53:38 +0000 (0:00:02.113) 0:00:12.226 *****
2026-02-14 06:53:49.799986 | orchestrator | ok: [testbed-manager]
2026-02-14 06:53:49.799993 | orchestrator |
2026-02-14 06:53:49.800016 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-02-14 06:53:49.800024 | orchestrator | Saturday 14 February 2026 06:53:40 +0000 (0:00:02.113) 0:00:14.339 *****
2026-02-14 06:53:49.800031 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-02-14 06:53:49.800039 | orchestrator | ok: [testbed-manager] => (item=ceph-authtool)
2026-02-14 06:53:49.800047 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-02-14 06:53:49.800054 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-02-14 06:53:49.800061 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-02-14 06:53:49.800068 | orchestrator |
2026-02-14 06:53:49.800075 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-02-14 06:53:49.800083 | orchestrator | Saturday 14 February 2026 06:53:45 +0000 (0:00:05.009) 0:00:19.348 *****
2026-02-14 06:53:49.800090 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-02-14 06:53:49.800118 | orchestrator |
2026-02-14 06:53:49.800125 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-02-14 06:53:49.800132 | orchestrator | Saturday 14 February 2026 06:53:46 +0000 (0:00:01.427) 0:00:20.776 *****
2026-02-14 06:53:49.800139 | orchestrator | skipping: [testbed-manager]
2026-02-14 06:53:49.800147 | orchestrator |
2026-02-14 06:53:49.800154 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-02-14 06:53:49.800161 | orchestrator | Saturday 14 February 2026 06:53:48 +0000 (0:00:01.165) 0:00:21.942 *****
2026-02-14 06:53:49.800168 | orchestrator | skipping: [testbed-manager]
2026-02-14 06:53:49.800176 | orchestrator |
2026-02-14 06:53:49.800185 | orchestrator | PLAY RECAP *********************************************************************
2026-02-14 06:53:49.800193 | orchestrator | testbed-manager : ok=8  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-14 06:53:49.800201 | orchestrator |
2026-02-14 06:53:49.800210 | orchestrator |
2026-02-14 06:53:49.800218 | orchestrator | TASKS RECAP ********************************************************************
2026-02-14 06:53:49.800226 | orchestrator | Saturday 14 February 2026 06:53:49 +0000 (0:00:01.487) 0:00:23.430 *****
2026-02-14 06:53:49.800235 | orchestrator | ===============================================================================
2026-02-14 06:53:49.800243 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 5.01s
2026-02-14 06:53:49.800251 | orchestrator | osism.services.cephclient : Create required directories ----------------- 2.59s
2026-02-14 06:53:49.800259 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------- 2.11s
2026-02-14 06:53:49.800267 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 2.10s
2026-02-14 06:53:49.800276 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 2.00s
2026-02-14 06:53:49.800286 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.91s
2026-02-14 06:53:49.800296 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 1.86s
2026-02-14 06:53:49.800305 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 1.48s
2026-02-14 06:53:49.800315 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 1.43s
2026-02-14 06:53:49.800325 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 1.17s
2026-02-14 06:53:50.118313 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-02-14 06:53:50.118410 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/300-openstack.sh
2026-02-14 06:53:50.129067 | orchestrator | + set -e
2026-02-14 06:53:50.129146 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-14 06:53:50.129173 | orchestrator | ++ export INTERACTIVE=false
2026-02-14 06:53:50.129194 | orchestrator | ++ INTERACTIVE=false
2026-02-14 06:53:50.129214 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-14 06:53:50.129226 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-14 06:53:50.129237 | orchestrator | + source /opt/manager-vars.sh
2026-02-14 06:53:50.129248 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-14 06:53:50.129258 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-14 06:53:50.129636 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-14 06:53:50.129738 | orchestrator | ++ CEPH_VERSION=reef
2026-02-14 06:53:50.129755 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-14 06:53:50.129768 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-14 06:53:50.129780 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-14 06:53:50.129792 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-14 06:53:50.129803 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-14 06:53:50.129814 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-14 06:53:50.129825 | orchestrator | ++ export ARA=false
2026-02-14 06:53:50.129836 | orchestrator | ++ ARA=false
2026-02-14 06:53:50.129848 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-14 06:53:50.129859 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-14 06:53:50.129870 | orchestrator | ++ export TEMPEST=false
2026-02-14 06:53:50.129917 | orchestrator | ++ TEMPEST=false
2026-02-14 06:53:50.129936 | orchestrator | ++ export IS_ZUUL=true
2026-02-14 06:53:50.129954 | orchestrator | ++ IS_ZUUL=true
2026-02-14 06:53:50.129973 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.122
2026-02-14 06:53:50.129991 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.122
2026-02-14 06:53:50.130115 | orchestrator | ++ export EXTERNAL_API=false
2026-02-14 06:53:50.130139 | orchestrator | ++ EXTERNAL_API=false
2026-02-14 06:53:50.130157 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-14 06:53:50.130168 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-14 06:53:50.130179 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-14 06:53:50.130189 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-14 06:53:50.130200 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-14 06:53:50.130214 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-14 06:53:50.130227 | orchestrator | ++ export RABBITMQ3TO4=true
2026-02-14 06:53:50.130239 | orchestrator | ++ RABBITMQ3TO4=true
2026-02-14 06:53:50.130251 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-02-14 06:53:50.130949 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-02-14 06:53:50.137101 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1
2026-02-14 06:53:50.137171 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1
2026-02-14 06:53:50.137192 | orchestrator | + [[ true == \t\r\u\e ]]
2026-02-14 06:53:50.137211 | orchestrator | + osism migrate rabbitmq3to4 prepare
2026-02-14 06:54:12.622849 | orchestrator | 2026-02-14 06:54:12 | ERROR  | Unable to get ansible vault password
2026-02-14 06:54:12.622928 | orchestrator | 2026-02-14 06:54:12 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-02-14 06:54:12.622935 | orchestrator | 2026-02-14 06:54:12 | ERROR  | Dropping encrypted entries
2026-02-14 06:54:12.660686 | orchestrator | 2026-02-14 06:54:12 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2026-02-14 06:54:12.662343 | orchestrator | 2026-02-14 06:54:12 | INFO  | Kolla configuration check passed
2026-02-14 06:54:12.845566 | orchestrator | 2026-02-14 06:54:12 | INFO  | Created vhost 'openstack' with default_queue_type=quorum
2026-02-14 06:54:12.863475 | orchestrator | 2026-02-14 06:54:12 | INFO  | Set permissions for user 'openstack' on vhost 'openstack'
2026-02-14 06:54:13.228980 | orchestrator | + osism migrate rabbitmq3to4 list
2026-02-14 06:54:34.740277 | orchestrator | 2026-02-14 06:54:34 | ERROR  | Unable to get ansible vault password
2026-02-14 06:54:34.740396 | orchestrator | 2026-02-14 06:54:34 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-02-14 06:54:34.740413 | orchestrator | 2026-02-14 06:54:34 | ERROR  | Dropping encrypted entries
2026-02-14 06:54:34.782381 | orchestrator | 2026-02-14 06:54:34 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2026-02-14 06:54:34.938670 | orchestrator | 2026-02-14 06:54:34 | INFO  | Found 208 classic queue(s) in vhost '/':
2026-02-14 06:54:34.938906 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - alarm.all.sample (vhost: /, messages: 0)
2026-02-14 06:54:34.938925 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - alarming.sample (vhost: /, messages: 0)
2026-02-14 06:54:34.938949 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - barbican.workers (vhost: /, messages: 0)
2026-02-14 06:54:34.938961 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - barbican.workers.barbican.queue (vhost: /, messages: 0)
2026-02-14 06:54:34.939717 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - barbican.workers_fanout_4d89c349941b4107bb526cd52a20616c (vhost: /, messages: 0)
2026-02-14 06:54:34.939749 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - barbican.workers_fanout_7f0cab02e615435ab467f6741bd80963 (vhost: /, messages: 0)
2026-02-14 06:54:34.941120 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - barbican.workers_fanout_989caec5bf6d417d979cad9557acea2b (vhost: /, messages: 0)
2026-02-14 06:54:34.941212 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - barbican_notifications.info (vhost: /, messages: 0)
2026-02-14 06:54:34.941258 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - central (vhost: /, messages: 0)
2026-02-14 06:54:34.941271 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - central.testbed-node-0 (vhost: /, messages: 0)
2026-02-14 06:54:34.941282 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - central.testbed-node-1 (vhost: /, messages: 0)
2026-02-14 06:54:34.941293 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - central.testbed-node-2 (vhost: /, messages: 0)
2026-02-14 06:54:34.941663 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - central_fanout_10517f1e50684a60a797c0e61c81ae84 (vhost: /, messages: 0)
2026-02-14 06:54:34.941713 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - central_fanout_19848d1c4a6944adae49e0571742807c (vhost: /, messages: 0)
2026-02-14 06:54:34.942010 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - central_fanout_34313f2d2d8941bea6869e10c0b464f6 (vhost: /, messages: 0)
2026-02-14 06:54:34.942085 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - central_fanout_3682bb15b9bb4b14b5d4fab332d964d8 (vhost: /, messages: 0)
2026-02-14 06:54:34.942574 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - central_fanout_693d95a1de1f49fead88230e76246398 (vhost: /, messages: 0)
2026-02-14 06:54:34.942965 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - central_fanout_75eef738836e4b9f8ba5dfa9eb3f4a68 (vhost: /, messages: 0)
2026-02-14 06:54:34.942988 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - cinder-backup (vhost: /, messages: 0)
2026-02-14 06:54:34.943311 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - cinder-backup.testbed-node-0 (vhost: /, messages: 0)
2026-02-14 06:54:34.943627 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - cinder-backup.testbed-node-1 (vhost: /, messages: 0)
2026-02-14 06:54:34.943657 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - cinder-backup.testbed-node-2 (vhost: /, messages: 0)
2026-02-14 06:54:34.943941 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - cinder-backup_fanout_6501ef88f7964286b2ac2403035ec9d2 (vhost: /, messages: 0)
2026-02-14 06:54:34.944334 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - cinder-backup_fanout_6f8760e4464740ca831e53a7707e8c32 (vhost: /, messages: 0)
2026-02-14 06:54:34.944634 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - cinder-backup_fanout_efe443fa4ae4406a82d0a4998882e1a5 (vhost: /, messages: 0)
2026-02-14 06:54:34.944663 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - cinder-scheduler (vhost: /, messages: 0)
2026-02-14 06:54:34.945068 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - cinder-scheduler.testbed-node-0 (vhost: /, messages: 0)
2026-02-14 06:54:34.945091 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - cinder-scheduler.testbed-node-1 (vhost: /, messages: 0)
2026-02-14 06:54:34.945291 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - cinder-scheduler.testbed-node-2 (vhost: /, messages: 0)
2026-02-14 06:54:34.945493 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - cinder-scheduler_fanout_11cd1501c1a843a1b02c556933ebfc0b (vhost: /, messages: 0)
2026-02-14 06:54:34.945606 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - cinder-scheduler_fanout_65940a3aaa624c86a8f3e5b0244ba8c2 (vhost: /, messages: 0)
2026-02-14 06:54:34.945869 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - cinder-scheduler_fanout_c3cb582446f447dd8b74cf0384d5e259 (vhost: /, messages: 0)
2026-02-14 06:54:34.945965 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - cinder-volume (vhost: /, messages: 0)
2026-02-14 06:54:34.946432 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes (vhost: /, messages: 0)
2026-02-14 06:54:34.946629 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes.testbed-node-0 (vhost: /, messages: 0)
2026-02-14 06:54:34.946895 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes_fanout_1964d484fab648ad8362beb3627cadb7 (vhost: /, messages: 0)
2026-02-14 06:54:34.946917 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes (vhost: /, messages: 0)
2026-02-14 06:54:34.947031 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes.testbed-node-1 (vhost: /, messages: 0)
2026-02-14 06:54:34.947455 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes_fanout_6a669fb9fc4543e1bf6aeaf189f0c761 (vhost: /, messages: 0)
2026-02-14 06:54:34.948166 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes (vhost: /, messages: 0)
2026-02-14 06:54:34.948225 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes.testbed-node-2 (vhost: /, messages: 0)
2026-02-14 06:54:34.948239 | orchestrator | 2026-02-14 06:54:34 | INFO
|  - cinder-volume.testbed-node-2@rbd-volumes_fanout_72076e5324404c7fb66b029c4b0829cb (vhost: /, messages: 0) 2026-02-14 06:54:34.948662 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - cinder-volume_fanout_2e9e515726ee4deca44224ba7649ca4e (vhost: /, messages: 0) 2026-02-14 06:54:34.948685 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - cinder-volume_fanout_3d8d90bc33804bb3900b627e8f540b7c (vhost: /, messages: 0) 2026-02-14 06:54:34.948697 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - cinder-volume_fanout_68aecf09b6f844469034124a1b9e52a2 (vhost: /, messages: 0) 2026-02-14 06:54:34.948871 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - compute (vhost: /, messages: 0) 2026-02-14 06:54:34.948893 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - compute.testbed-node-3 (vhost: /, messages: 0) 2026-02-14 06:54:34.949181 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - compute.testbed-node-4 (vhost: /, messages: 0) 2026-02-14 06:54:34.949202 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - compute.testbed-node-5 (vhost: /, messages: 0) 2026-02-14 06:54:34.949502 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - compute_fanout_2efd48a7a3e6438191f834764183bc19 (vhost: /, messages: 0) 2026-02-14 06:54:34.949522 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - compute_fanout_7f01fca12e85487b8d202a8ed3a72b75 (vhost: /, messages: 0) 2026-02-14 06:54:34.949864 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - compute_fanout_a33f9f94ba7e4a669d741b3100b5a889 (vhost: /, messages: 0) 2026-02-14 06:54:34.949886 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - conductor (vhost: /, messages: 0) 2026-02-14 06:54:34.950181 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - conductor.testbed-node-0 (vhost: /, messages: 0) 2026-02-14 06:54:34.950203 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - conductor.testbed-node-1 (vhost: /, messages: 0) 2026-02-14 06:54:34.950425 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - conductor.testbed-node-2 (vhost: /, messages: 0) 
2026-02-14 06:54:34.950459 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - conductor_fanout_019d1851d6314e899981f04666646307 (vhost: /, messages: 0)
2026-02-14 06:54:34.950726 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - conductor_fanout_100854cc466e4a188e678176d886f9e3 (vhost: /, messages: 0)
2026-02-14 06:54:34.950746 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - conductor_fanout_3aaef49ad72947438f473f2bdd50a948 (vhost: /, messages: 0)
2026-02-14 06:54:34.951260 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - conductor_fanout_87417aed04ad4efd8119857ac4c807e5 (vhost: /, messages: 0)
2026-02-14 06:54:34.951283 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - conductor_fanout_da87544a889146448cb0c139c4d9145b (vhost: /, messages: 0)
2026-02-14 06:54:34.951403 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - conductor_fanout_e26ee666db72446f9f87d9a54c28f437 (vhost: /, messages: 0)
2026-02-14 06:54:34.951650 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - event.sample (vhost: /, messages: 5)
2026-02-14 06:54:34.952066 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - magnum-conductor (vhost: /, messages: 0)
2026-02-14 06:54:34.952103 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - magnum-conductor.hjq3hkhv5iuj (vhost: /, messages: 0)
2026-02-14 06:54:34.952469 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - magnum-conductor.k44aqb3ok5b5 (vhost: /, messages: 0)
2026-02-14 06:54:34.952492 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - magnum-conductor.wxzrymye6kw5 (vhost: /, messages: 0)
2026-02-14 06:54:34.952765 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - magnum-conductor_fanout_0573436becbe4c9e908582acb08c8d53 (vhost: /, messages: 0)
2026-02-14 06:54:34.952816 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - magnum-conductor_fanout_2a8ac05dec584a4baeed4103aed173c9 (vhost: /, messages: 0)
2026-02-14 06:54:34.953258 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - magnum-conductor_fanout_8dc70805579c462cb355fa9402be4aa3 (vhost: /, messages: 0)
2026-02-14 06:54:34.953278 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - magnum-conductor_fanout_916d0a3a04d3440eb98b00ec0143f0a9 (vhost: /, messages: 0)
2026-02-14 06:54:34.953536 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - magnum-conductor_fanout_af92041797cf4f8e8e10a0f09f61da98 (vhost: /, messages: 0)
2026-02-14 06:54:34.953556 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - magnum-conductor_fanout_c1b8b4344a3b46ab8d3db35845b2a6a3 (vhost: /, messages: 0)
2026-02-14 06:54:34.953567 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - magnum-conductor_fanout_c8096694bd5b4492ae8cc9e8bf111804 (vhost: /, messages: 0)
2026-02-14 06:54:34.953849 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - magnum-conductor_fanout_d32ce6388e5143e28e75e636c246315c (vhost: /, messages: 0)
2026-02-14 06:54:34.953870 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - magnum-conductor_fanout_fbc58e32813e4d9fbfec7fdcb075a43b (vhost: /, messages: 0)
2026-02-14 06:54:34.954461 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - manila-data (vhost: /, messages: 0)
2026-02-14 06:54:34.954560 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - manila-data.testbed-node-0 (vhost: /, messages: 0)
2026-02-14 06:54:34.954583 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - manila-data.testbed-node-1 (vhost: /, messages: 0)
2026-02-14 06:54:34.954600 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - manila-data.testbed-node-2 (vhost: /, messages: 0)
2026-02-14 06:54:34.954612 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - manila-data_fanout_5da356e070f94d408c534aeada1b90dd (vhost: /, messages: 0)
2026-02-14 06:54:34.954906 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - manila-data_fanout_679b6a59b4834aaca3a7a7520e5363d0 (vhost: /, messages: 0)
2026-02-14 06:54:34.954928 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - manila-data_fanout_9a002c3277684aa39c9da6df95b2f522 (vhost: /, messages: 0)
2026-02-14 06:54:34.955429 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - manila-scheduler (vhost: /, messages: 0)
2026-02-14 06:54:34.955700 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - manila-scheduler.testbed-node-0 (vhost: /, messages: 0)
2026-02-14 06:54:34.955720 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - manila-scheduler.testbed-node-1 (vhost: /, messages: 0)
2026-02-14 06:54:34.955986 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - manila-scheduler.testbed-node-2 (vhost: /, messages: 0)
2026-02-14 06:54:34.956025 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - manila-scheduler_fanout_444d4e6ea4604698b65a682cb430501a (vhost: /, messages: 0)
2026-02-14 06:54:34.956367 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - manila-scheduler_fanout_a67e0bde9d7c498da6a2acb9a688ae9d (vhost: /, messages: 0)
2026-02-14 06:54:34.956390 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - manila-scheduler_fanout_aedc723c6d59480f9984e4edd0da3d9e (vhost: /, messages: 0)
2026-02-14 06:54:34.956618 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - manila-share (vhost: /, messages: 0)
2026-02-14 06:54:34.956741 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - manila-share.testbed-node-0@cephfsnative1 (vhost: /, messages: 0)
2026-02-14 06:54:34.956795 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - manila-share.testbed-node-1@cephfsnative1 (vhost: /, messages: 0)
2026-02-14 06:54:34.957124 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - manila-share.testbed-node-2@cephfsnative1 (vhost: /, messages: 0)
2026-02-14 06:54:34.957148 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - manila-share_fanout_3284274501c547f0958eba5cd54395ea (vhost: /, messages: 0)
2026-02-14 06:54:34.957439 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - manila-share_fanout_96ffc467a1dd4e259c085ea36ece81e5 (vhost: /, messages: 0)
2026-02-14 06:54:34.957457 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - manila-share_fanout_c2671a3ad31f4ba8baed842c05187d71 (vhost: /, messages: 0)
2026-02-14 06:54:34.957469 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - notifications.audit (vhost: /, messages: 0)
2026-02-14 06:54:34.957901 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - notifications.critical (vhost: /, messages: 0)
2026-02-14 06:54:34.957932 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - notifications.debug (vhost: /, messages: 0)
2026-02-14 06:54:34.958468 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - notifications.error (vhost: /, messages: 0)
2026-02-14 06:54:34.958508 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - notifications.info (vhost: /, messages: 0)
2026-02-14 06:54:34.958735 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - notifications.sample (vhost: /, messages: 0)
2026-02-14 06:54:34.958757 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - notifications.warn (vhost: /, messages: 0)
2026-02-14 06:54:34.958814 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - octavia_provisioning_v2 (vhost: /, messages: 0)
2026-02-14 06:54:34.958837 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - octavia_provisioning_v2.testbed-node-0 (vhost: /, messages: 0)
2026-02-14 06:54:34.959069 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - octavia_provisioning_v2.testbed-node-1 (vhost: /, messages: 0)
2026-02-14 06:54:34.959098 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - octavia_provisioning_v2.testbed-node-2 (vhost: /, messages: 0)
2026-02-14 06:54:34.959117 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - octavia_provisioning_v2_fanout_428f0c983d8e493cbd1116e1392a76c9 (vhost: /, messages: 0)
2026-02-14 06:54:34.959527 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - octavia_provisioning_v2_fanout_63c21d139e9f466eb98fb8df1be9b46e (vhost: /, messages: 0)
2026-02-14 06:54:34.959563 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - octavia_provisioning_v2_fanout_f577614518d64f6c91bb7e555df4103e (vhost: /, messages: 0)
2026-02-14 06:54:34.959582 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - producer (vhost: /, messages: 0)
2026-02-14 06:54:34.959730 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - producer.testbed-node-0 (vhost: /, messages: 0)
2026-02-14 06:54:34.959758 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - producer.testbed-node-1 (vhost: /, messages: 0)
2026-02-14 06:54:34.960126 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - producer.testbed-node-2 (vhost: /, messages: 0)
2026-02-14 06:54:34.960155 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - producer_fanout_0fc66316090546fd839cf8d86a74a1da (vhost: /, messages: 0)
2026-02-14 06:54:34.960664 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - producer_fanout_1bc96bce139240f0aa535496898c9b18 (vhost: /, messages: 0)
2026-02-14 06:54:34.960706 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - producer_fanout_36ec97e861634e18866ffaacaa6d21ff (vhost: /, messages: 0)
2026-02-14 06:54:34.960910 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - producer_fanout_6cb824be1aef4190931081d5a6f705c1 (vhost: /, messages: 0)
2026-02-14 06:54:34.960943 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - producer_fanout_76303635258e47a5b613f84cff391880 (vhost: /, messages: 0)
2026-02-14 06:54:34.960960 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - producer_fanout_7a731531f3c94a50a433639b932d768c (vhost: /, messages: 0)
2026-02-14 06:54:34.961066 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-plugin (vhost: /, messages: 0)
2026-02-14 06:54:34.961098 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-plugin.testbed-node-0 (vhost: /, messages: 0)
2026-02-14 06:54:34.961275 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-plugin.testbed-node-1 (vhost: /, messages: 0)
2026-02-14 06:54:34.961824 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-plugin.testbed-node-2 (vhost: /, messages: 0)
2026-02-14 06:54:34.961855 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-plugin_fanout_07c6eef72e3c4d83ab671bc6c7518f80 (vhost: /, messages: 0)
2026-02-14 06:54:34.961871 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-plugin_fanout_19fe54d6e4de47eda909f89667e20cdd (vhost: /, messages: 0)
2026-02-14 06:54:34.961888 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-plugin_fanout_985bf6c489e14d699b08008283fc6473 (vhost: /, messages: 0)
2026-02-14 06:54:34.962180 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-plugin_fanout_987891063ee74105919c22c8f45e5c89 (vhost: /, messages: 0)
2026-02-14 06:54:34.962209 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-plugin_fanout_a475ce59113948acbec2ae6a0c535b78 (vhost: /, messages: 0)
2026-02-14 06:54:34.962512 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-plugin_fanout_cc160736d00e46ccb3aba60c507a6b28 (vhost: /, messages: 0)
2026-02-14 06:54:34.962541 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-plugin_fanout_d56c99c1b0434097b3547fba31d7d736 (vhost: /, messages: 0)
2026-02-14 06:54:34.962558 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-plugin_fanout_f3e12e6f14894f148b9e2c1c7f7cfa8c (vhost: /, messages: 0)
2026-02-14 06:54:34.962800 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-plugin_fanout_fd3da9830c6d4c02be49af78d5e8e9d6 (vhost: /, messages: 0)
2026-02-14 06:54:34.962829 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-reports-plugin (vhost: /, messages: 0)
2026-02-14 06:54:34.963143 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-reports-plugin.testbed-node-0 (vhost: /, messages: 0)
2026-02-14 06:54:34.963171 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-reports-plugin.testbed-node-1 (vhost: /, messages: 0)
2026-02-14 06:54:34.963355 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-reports-plugin.testbed-node-2 (vhost: /, messages: 0)
2026-02-14 06:54:34.963374 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-reports-plugin_fanout_0a1b9f2572e04224916d6a0315471021 (vhost: /, messages: 0)
2026-02-14 06:54:34.963829 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-reports-plugin_fanout_0ea148e116a54c03907c5c5e027bb603 (vhost: /, messages: 0)
2026-02-14 06:54:34.963876 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-reports-plugin_fanout_15f5f9b930dd45ffa65c26b82333c287 (vhost: /, messages: 0)
2026-02-14 06:54:34.963894 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-reports-plugin_fanout_256fcf904fa146fa9cbb66177acbf47f (vhost: /, messages: 0)
2026-02-14 06:54:34.964220 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-reports-plugin_fanout_5d2956c14ad2498d82a5dac58181b1d2 (vhost: /, messages: 0)
2026-02-14 06:54:34.964320 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-reports-plugin_fanout_66f7580e98cd46e8893788b64abbed19 (vhost: /, messages: 0)
2026-02-14 06:54:34.964415 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-reports-plugin_fanout_81e477e1d3324610ad217e0db33477ae (vhost: /, messages: 0)
2026-02-14 06:54:34.964434 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-reports-plugin_fanout_85f7a4a1706a4ad896e3f31195200773 (vhost: /, messages: 0)
2026-02-14 06:54:34.964719 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-reports-plugin_fanout_8c48a9cc706e4ed9b9f6eaa59c1115d2 (vhost: /, messages: 0)
2026-02-14 06:54:34.964742 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-reports-plugin_fanout_971f7b8e67f14688930e76f2b0c53fb1 (vhost: /, messages: 0)
2026-02-14 06:54:34.965063 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-reports-plugin_fanout_9c1f3e30d9434f8abac690a42dd7d863 (vhost: /, messages: 0)
2026-02-14 06:54:34.965088 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-reports-plugin_fanout_9ef1288c29f244d68a1d0c476d4848b7 (vhost: /, messages: 0)
2026-02-14 06:54:34.965400 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-reports-plugin_fanout_bead5a47744f4e6b8a549dcaae1a36b6 (vhost: /, messages: 0)
2026-02-14 06:54:34.965425 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-reports-plugin_fanout_c578cda9f2cc4de9bbedaa0bd353e4f8 (vhost: /, messages: 0)
2026-02-14 06:54:34.965650 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-reports-plugin_fanout_cd7c96ca0b8b4668b7bf0b7df02af5bf (vhost: /, messages: 0)
2026-02-14 06:54:34.965671 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-reports-plugin_fanout_d00cfbce638440b5a2c9ef7bcd7a099c (vhost: /, messages: 0)
2026-02-14 06:54:34.965989 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-reports-plugin_fanout_e730b8087c1f4482b3c987a848a09537 (vhost: /, messages: 0)
2026-02-14 06:54:34.966015 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-reports-plugin_fanout_f9abd0ae8b0c46daa63ab4ee824d3fee (vhost: /, messages: 0)
2026-02-14 06:54:34.966412 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-server-resource-versions (vhost: /, messages: 0)
2026-02-14 06:54:34.966436 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-server-resource-versions.testbed-node-0 (vhost: /, messages: 0)
2026-02-14 06:54:34.966652 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-server-resource-versions.testbed-node-1 (vhost: /, messages: 0)
2026-02-14 06:54:34.966671 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-server-resource-versions.testbed-node-2 (vhost: /, messages: 0)
2026-02-14 06:54:34.966956 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-server-resource-versions_fanout_06004f093fef4a04953a9452e8811abe (vhost: /, messages: 0)
2026-02-14 06:54:34.966977 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-server-resource-versions_fanout_15f9e05f683548e2a4b7f2cb7890ee7b (vhost: /, messages: 0)
2026-02-14 06:54:34.967323 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-server-resource-versions_fanout_69621b2a37354982981c83bde39bb5ad (vhost: /, messages: 0)
2026-02-14 06:54:34.967341 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-server-resource-versions_fanout_8075a264f5264ad28f29c688840049bc (vhost: /, messages: 0)
2026-02-14 06:54:34.967686 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-server-resource-versions_fanout_b6067fee4631484b9b5ab877f1ab14ac (vhost: /, messages: 0)
2026-02-14 06:54:34.967705 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-server-resource-versions_fanout_bf227493aef741eb881eeaeb292c1894 (vhost: /, messages: 0)
2026-02-14 06:54:34.967944 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-server-resource-versions_fanout_d3b2be35b51b4e989413417f58b0060b (vhost: /, messages: 0)
2026-02-14 06:54:34.967965 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-server-resource-versions_fanout_da766e165a084c57ba21254e55c60a2b (vhost: /, messages: 0)
2026-02-14 06:54:34.968341 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - q-server-resource-versions_fanout_de3ee7ee50754c0aa83a8c49e368c830 (vhost: /, messages: 0)
2026-02-14 06:54:34.968534 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - reply_04b13b8a86c1412aab0f0d6c130ee458 (vhost: /, messages: 0)
2026-02-14 06:54:34.968647 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - reply_0b303726c3024180adb7bd1d2b310be2 (vhost: /, messages: 0)
2026-02-14 06:54:34.968966 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - reply_1ab03b952fc9405891e9bdbf1bf7e2e8 (vhost: /, messages: 1)
2026-02-14 06:54:34.968987 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - reply_296b9fdd55f44c65945ea0566c8dfa67 (vhost: /, messages: 0)
2026-02-14 06:54:34.969320 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - reply_2c3c98ace30a4d15b6ccbd091eb01da5 (vhost: /, messages: 0)
2026-02-14 06:54:34.969341 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - reply_36fb0a586eac4ae4a67b22c7d149c6b8 (vhost: /, messages: 0)
2026-02-14 06:54:34.969352 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - reply_3961c6f4b098445c997f3738b0e7e3a2 (vhost: /, messages: 0)
2026-02-14 06:54:34.969766 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - reply_41bf11734f4e41da8af96b7ef348140f (vhost: /, messages: 0)
2026-02-14 06:54:34.969801 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - reply_4c1fd2ab845048e7bebe456761e74b7f (vhost: /, messages: 0)
2026-02-14 06:54:34.969815 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - reply_64ee12b21d994ef1a13f030f67a0815f (vhost: /, messages: 0)
2026-02-14 06:54:34.970193 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - reply_70c294b1ba39407b836dfd9c825d17a7 (vhost: /, messages: 0)
2026-02-14 06:54:34.970214 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - reply_82c25927b1bb4cd4a43d0de80f1332df (vhost: /, messages: 0)
2026-02-14 06:54:34.970435 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - reply_920ad5a884bc4f0eaccd10fd76efde53 (vhost: /, messages: 1)
2026-02-14 06:54:34.970461 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - reply_a8a4565a4d864b69b55c381bfaf981ae (vhost: /, messages: 0)
2026-02-14 06:54:34.970833 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - reply_abeb9c2d68e54eb08ab0ae10253f3274 (vhost: /, messages: 0)
2026-02-14 06:54:34.970856 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - reply_ac77da83683a455dad3587af647897f2 (vhost: /, messages: 0)
2026-02-14 06:54:34.971107 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - reply_c0039b98c47a4c40a3817c6665d26d3c (vhost: /, messages: 0)
2026-02-14 06:54:34.971127 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - reply_c860e1aa86cc47ec81709939951a1fd7 (vhost: /, messages: 0)
2026-02-14 06:54:34.971471 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - reply_ffb50f0b2ff6446f83e7748ef58b7d48 (vhost: /, messages: 0)
2026-02-14 06:54:34.971491 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - scheduler (vhost: /, messages: 0)
2026-02-14 06:54:34.971888 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - scheduler.testbed-node-0 (vhost: /, messages: 0)
2026-02-14 06:54:34.971924 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - scheduler.testbed-node-1 (vhost: /, messages: 0)
2026-02-14 06:54:34.972096 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - scheduler.testbed-node-2 (vhost: /, messages: 0)
2026-02-14 06:54:34.972114 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - scheduler_fanout_1b31eb7cb9f04d67b512efcf882d7659 (vhost: /, messages: 0)
2026-02-14 06:54:34.972501 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - scheduler_fanout_1b6cd97f0025404aa5f122dc7ab8226d (vhost: /, messages: 0)
2026-02-14 06:54:34.972572 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - scheduler_fanout_3a4c0b2319c840f1a5ba194b05d7337c (vhost: /, messages: 0)
2026-02-14 06:54:34.972592 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - scheduler_fanout_4b169a5d199a40c59089d320cb9cdfe1 (vhost: /, messages: 0)
2026-02-14 06:54:34.972916 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - scheduler_fanout_9818adad376945cc8482bcfd8d171b41 (vhost: /, messages: 0)
2026-02-14 06:54:34.972937 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - scheduler_fanout_a9589c1ac66d48ec9df2391acb8d1056 (vhost: /, messages: 0)
2026-02-14 06:54:34.973199 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - worker (vhost: /, messages: 0)
2026-02-14 06:54:34.973219 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - worker.testbed-node-0 (vhost: /, messages: 0)
2026-02-14 06:54:34.973344 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - worker.testbed-node-1 (vhost: /, messages: 0)
2026-02-14 06:54:34.973465 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - worker.testbed-node-2 (vhost: /, messages: 0)
2026-02-14 06:54:34.973868 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - worker_fanout_269ad6a12c834010a7024d6ff545db66 (vhost: /, messages: 0)
2026-02-14 06:54:34.973892 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - worker_fanout_5527cbc6a10c422aa136067f8670fdeb (vhost: /, messages: 0)
2026-02-14 06:54:34.974170 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - worker_fanout_6122670a44894f96864f5ad7128a69dd (vhost: /, messages: 0)
2026-02-14 06:54:34.974192 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - worker_fanout_a567bc80302a4e4096ecdb22e7058245 (vhost: /, messages: 0)
2026-02-14 06:54:34.974301 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - worker_fanout_ad0fdf6e66f04caea7c9d687bfea1456 (vhost: /, messages: 0)
2026-02-14 06:54:34.974389 | orchestrator | 2026-02-14 06:54:34 | INFO  |  - worker_fanout_e7793848cdd04978858db379b0a88cc6 (vhost: /, messages: 0)
2026-02-14 06:54:35.309542 | orchestrator | + osism migrate rabbitmq3to4 list-exchanges
2026-02-14 06:54:37.364702 | orchestrator | usage: osism migrate rabbitmq3to4 [-h] [--server SERVER] [--dry-run]
2026-02-14 06:54:37.364853 | orchestrator |                                   [--no-close-connections] [--quorum]
2026-02-14 06:54:37.364872 | orchestrator |                                   [--vhost VHOST]
2026-02-14 06:54:37.364885 | orchestrator |                                   [{list,delete,prepare,check}]
2026-02-14 06:54:37.364897 | orchestrator |                                   [{aodh,barbican,ceilometer,cinder,designate,notifications,manager,magnum,manila,neutron,nova,octavia}]
2026-02-14 06:54:37.364910 | orchestrator | osism migrate rabbitmq3to4: error: argument command: invalid choice: 'list-exchanges' (choose from list, delete, prepare, check)
2026-02-14 06:54:38.112297 | orchestrator | ERROR
2026-02-14 06:54:38.112517 | orchestrator | {
2026-02-14 06:54:38.112553 | orchestrator |   "delta": "2:04:45.676328",
2026-02-14 06:54:38.112577 | orchestrator |   "end": "2026-02-14 06:54:37.676581",
2026-02-14 06:54:38.112598 | orchestrator |   "msg": "non-zero return code",
2026-02-14 06:54:38.112617 | orchestrator |   "rc": 2,
2026-02-14 06:54:38.112635 | orchestrator |   "start": "2026-02-14 04:49:52.000253"
2026-02-14 06:54:38.112653 | orchestrator | } failure
2026-02-14 06:54:38.414580 |
2026-02-14 06:54:38.414820 | PLAY RECAP
2026-02-14 06:54:38.414973 | orchestrator | ok: 30 changed: 11 unreachable: 0 failed: 1 skipped: 6 rescued: 0 ignored: 0
2026-02-14 06:54:38.415005 |
2026-02-14 06:54:38.653115 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/upgrade-stable.yml@main]
2026-02-14 06:54:38.655383 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-02-14 06:54:39.435146 |
2026-02-14 06:54:39.435321 | PLAY [Post output play]
2026-02-14 06:54:39.453575 |
2026-02-14 06:54:39.453722 | LOOP [stage-output : Register sources]
2026-02-14 06:54:39.525796 |
2026-02-14 06:54:39.526119 | TASK [stage-output : Check sudo]
2026-02-14 06:54:40.467933 | orchestrator | sudo: a password is required
2026-02-14 06:54:40.564704 | orchestrator | ok: Runtime: 0:00:00.013771
2026-02-14 06:54:40.580307 |
2026-02-14 06:54:40.580475 | LOOP [stage-output : Set source and destination for files and folders]
2026-02-14 06:54:40.621770 |
2026-02-14 06:54:40.622111 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-02-14 06:54:40.701256 | orchestrator | ok
2026-02-14 06:54:40.710522 |
2026-02-14 06:54:40.710655 | LOOP [stage-output : Ensure target folders exist]
2026-02-14 06:54:41.171148 | orchestrator | ok: "docs"
2026-02-14 06:54:41.171499 |
2026-02-14 06:54:41.423382 | orchestrator | ok: "artifacts"
2026-02-14 06:54:41.690653 | orchestrator | ok: "logs"
2026-02-14 06:54:41.709449 |
2026-02-14 06:54:41.709616 | LOOP [stage-output : Copy files and folders to staging folder]
2026-02-14 06:54:41.745607 |
2026-02-14 06:54:41.745954 | TASK [stage-output : Make all log files readable]
2026-02-14 06:54:42.044143 | orchestrator | ok
2026-02-14 06:54:42.054302 |
2026-02-14 06:54:42.054449 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-02-14 06:54:42.089007 | orchestrator | skipping: Conditional result was False
2026-02-14 06:54:42.103959 |
2026-02-14 06:54:42.104128 | TASK [stage-output : Discover log files for compression]
2026-02-14 06:54:42.129057 | orchestrator | skipping: Conditional result was False
2026-02-14 06:54:42.142792 |
2026-02-14 06:54:42.142995 | LOOP [stage-output : Archive everything from logs]
2026-02-14 06:54:42.183524 |
2026-02-14 06:54:42.183680 | PLAY [Post cleanup play]
2026-02-14 06:54:42.192102 |
2026-02-14 06:54:42.192211 | TASK [Set cloud fact (Zuul deployment)]
2026-02-14 06:54:42.247026 | orchestrator | ok
2026-02-14 06:54:42.260173 |
2026-02-14 06:54:42.260294 | TASK [Set cloud fact (local deployment)]
2026-02-14 06:54:42.283942 | orchestrator | skipping: Conditional result was False
2026-02-14 06:54:42.298107 |
2026-02-14 06:54:42.298237 | TASK [Clean the cloud environment]
2026-02-14 06:54:42.902991 | orchestrator | 2026-02-14 06:54:42 - clean up servers
2026-02-14 06:54:43.674152 | orchestrator | 2026-02-14 06:54:43 - testbed-manager
2026-02-14 06:54:43.763987 | orchestrator | 2026-02-14 06:54:43 - testbed-node-4
2026-02-14 06:54:43.855802 | orchestrator | 2026-02-14 06:54:43 - testbed-node-5
2026-02-14 06:54:43.944312 | orchestrator | 2026-02-14 06:54:43 - testbed-node-3
2026-02-14 06:54:44.034529 | orchestrator | 2026-02-14 06:54:44 - testbed-node-0
2026-02-14 06:54:44.125215 | orchestrator | 2026-02-14 06:54:44 - testbed-node-2
2026-02-14 06:54:44.213727 | orchestrator | 2026-02-14 06:54:44 - testbed-node-1
2026-02-14 06:54:44.298708 | orchestrator | 2026-02-14 06:54:44 - clean up keypairs
2026-02-14 06:54:44.317625 | orchestrator | 2026-02-14 06:54:44 - testbed
2026-02-14 06:54:44.345248 | orchestrator | 2026-02-14 06:54:44 - wait for servers to be gone
2026-02-14 06:54:55.288179 | orchestrator | 2026-02-14 06:54:55 - clean up ports
2026-02-14 06:54:55.493844 | orchestrator | 2026-02-14 06:54:55 - 1733135e-dad3-458c-9bb0-c761a1fa7697
2026-02-14 06:54:55.737762 | orchestrator | 2026-02-14 06:54:55 - 209939a9-c650-4be7-8a42-ef588f58e85d
2026-02-14 06:54:56.022630 | orchestrator | 2026-02-14 06:54:56 - 6a168c72-6c6d-4abb-bd7e-4ce0dd84807d
2026-02-14 06:54:56.704396 | orchestrator | 2026-02-14 06:54:56 - a2b893eb-f16a-46e7-ba87-c8468aed1aa0
2026-02-14 06:54:57.114609 | orchestrator | 2026-02-14 06:54:57 - a798ecbe-a809-42f3-b1aa-de17ef27652a
2026-02-14 06:54:57.312323 | orchestrator | 2026-02-14 06:54:57 - b9834feb-ba2b-4b22-a448-cb350d7dc40a
2026-02-14 06:54:57.516082 | orchestrator | 2026-02-14 06:54:57 - d87f97b4-2ee9-4e64-ae2d-fb0f7b8c47d4
2026-02-14 06:54:57.719701 | orchestrator | 2026-02-14 06:54:57 - clean up volumes
2026-02-14 06:54:57.843164 | orchestrator | 2026-02-14 06:54:57 - testbed-volume-manager-base
2026-02-14 06:54:57.880769 | orchestrator | 2026-02-14 06:54:57 - testbed-volume-5-node-base
2026-02-14 06:54:57.922409 | orchestrator | 2026-02-14 06:54:57 - testbed-volume-4-node-base
2026-02-14 06:54:57.967091 | orchestrator | 2026-02-14 06:54:57 - testbed-volume-2-node-base
2026-02-14 06:54:58.006390 | orchestrator | 2026-02-14 06:54:58 - testbed-volume-1-node-base
2026-02-14 06:54:58.052130 | orchestrator | 2026-02-14 06:54:58 - testbed-volume-3-node-base
2026-02-14 06:54:58.099695 | orchestrator | 2026-02-14 06:54:58 - testbed-volume-0-node-base
2026-02-14 06:54:58.145970 | orchestrator | 2026-02-14 06:54:58 - testbed-volume-4-node-4
2026-02-14 06:54:58.190766 | orchestrator | 2026-02-14 06:54:58 - testbed-volume-1-node-4
2026-02-14 06:54:58.234709 | orchestrator | 2026-02-14 06:54:58 - testbed-volume-8-node-5
2026-02-14 06:54:58.277335 | orchestrator | 2026-02-14 06:54:58 - testbed-volume-6-node-3
2026-02-14 06:54:58.318403 | orchestrator | 2026-02-14 06:54:58 - testbed-volume-3-node-3
2026-02-14 06:54:58.362477 | orchestrator | 2026-02-14 06:54:58 - testbed-volume-7-node-4
2026-02-14 06:54:58.403407 | orchestrator | 2026-02-14 06:54:58 - testbed-volume-5-node-5
2026-02-14 06:54:58.442983 | orchestrator | 2026-02-14 06:54:58 - testbed-volume-2-node-5
2026-02-14 06:54:58.484958 | orchestrator | 2026-02-14 06:54:58 - testbed-volume-0-node-3
2026-02-14 06:54:58.526962 | orchestrator | 2026-02-14 06:54:58 - disconnect routers
2026-02-14 06:54:58.656838 | orchestrator | 2026-02-14 06:54:58 - testbed
2026-02-14 06:54:59.642806 | orchestrator | 2026-02-14 06:54:59 - clean up subnets
2026-02-14 06:54:59.680801 | orchestrator | 2026-02-14 06:54:59 - subnet-testbed-management
2026-02-14 06:54:59.823213 | orchestrator | 2026-02-14 06:54:59 - clean up networks
2026-02-14 06:55:00.013762 | orchestrator | 2026-02-14 06:55:00 - net-testbed-management
2026-02-14 06:55:00.296670 | orchestrator | 2026-02-14 06:55:00 - clean up security groups
2026-02-14 06:55:00.340109 | orchestrator | 2026-02-14 06:55:00 - testbed-management
2026-02-14 06:55:00.469361 | orchestrator | 2026-02-14 06:55:00 - testbed-node
2026-02-14 06:55:00.568615 | orchestrator | 2026-02-14 06:55:00 - clean up floating ips
2026-02-14 06:55:00.605470 | orchestrator | 2026-02-14 06:55:00 - 81.163.193.122
2026-02-14 06:55:01.000345 | orchestrator | 2026-02-14 06:55:01 - clean up routers
2026-02-14 06:55:01.117590 | orchestrator | 2026-02-14 06:55:01 - testbed
2026-02-14 06:55:02.856033 | orchestrator | ok: Runtime: 0:00:19.823188
2026-02-14 06:55:02.860730 |
2026-02-14 06:55:02.860938 | PLAY RECAP
2026-02-14 06:55:02.861082 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-02-14 06:55:02.861154 |
2026-02-14 06:55:02.998938 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-02-14 06:55:03.001656 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-02-14 06:55:03.751075 |
2026-02-14 06:55:03.751247 | PLAY [Cleanup play]
2026-02-14 06:55:03.767393 |
2026-02-14 06:55:03.767535 | TASK [Set cloud fact (Zuul deployment)]
2026-02-14 06:55:03.823171 | orchestrator | ok
2026-02-14 06:55:03.832025 |
2026-02-14 06:55:03.832181 | TASK [Set cloud fact (local deployment)]
2026-02-14 06:55:03.866546 | orchestrator | skipping: Conditional result was False
2026-02-14 06:55:03.886054 |
2026-02-14 06:55:03.886237 | TASK [Clean the cloud environment]
2026-02-14 06:55:05.026487 | orchestrator | 2026-02-14 06:55:05 - clean up servers
2026-02-14 06:55:05.562194 | orchestrator | 2026-02-14 06:55:05 - clean up keypairs
2026-02-14 06:55:05.577504 | orchestrator | 2026-02-14 06:55:05 - wait for servers to be gone
2026-02-14 06:55:05.627666 | orchestrator | 2026-02-14 06:55:05 - clean up ports
2026-02-14 06:55:05.701758 | orchestrator | 2026-02-14 06:55:05 - clean up volumes
2026-02-14 06:55:05.765241 | orchestrator | 2026-02-14 06:55:05 - disconnect routers
2026-02-14 06:55:05.797694 | orchestrator | 2026-02-14 06:55:05 - clean up subnets
2026-02-14 06:55:05.817474 | orchestrator | 2026-02-14 06:55:05 - clean up networks
2026-02-14 06:55:05.951457 | orchestrator | 2026-02-14 06:55:05 - clean up security groups
2026-02-14 06:55:05.988773 | orchestrator | 2026-02-14 06:55:05 - clean up floating ips
2026-02-14 06:55:06.023686 | orchestrator | 2026-02-14 06:55:06 - clean up routers
2026-02-14 06:55:06.426807 | orchestrator | ok: Runtime: 0:00:01.393325
2026-02-14 06:55:06.430980 |
2026-02-14 06:55:06.431178 | PLAY RECAP
2026-02-14 06:55:06.431303 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-02-14 06:55:06.431364 |
2026-02-14 06:55:06.563354 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-02-14 06:55:06.565774 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-02-14 06:55:07.318176 |
2026-02-14 06:55:07.318335 | PLAY [Base post-fetch]
2026-02-14 06:55:07.333645 |
2026-02-14 06:55:07.333784 | TASK [fetch-output : Set log path for multiple nodes]
2026-02-14 06:55:07.399804 | orchestrator | skipping: Conditional result was False
2026-02-14 06:55:07.415042 |
2026-02-14 06:55:07.415273 | TASK [fetch-output : Set log path for single node]
2026-02-14 06:55:07.473487 | orchestrator | ok
2026-02-14 06:55:07.481999 |
2026-02-14 06:55:07.482145 | LOOP [fetch-output : Ensure local output dirs]
2026-02-14 06:55:07.955398 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/1a58906f9cdd43b884cb44b9013c953c/work/logs"
2026-02-14 06:55:08.238263 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/1a58906f9cdd43b884cb44b9013c953c/work/artifacts"
2026-02-14 06:55:08.504388 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/1a58906f9cdd43b884cb44b9013c953c/work/docs"
2026-02-14 06:55:08.524943 |
2026-02-14 06:55:08.525111 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-02-14 06:55:09.442862 | orchestrator | changed: .d..t...... ./ 2026-02-14 06:55:09.443194 | orchestrator | changed: All items complete 2026-02-14 06:55:09.443247 | 2026-02-14 06:55:10.153680 | orchestrator | changed: .d..t...... ./ 2026-02-14 06:55:10.896052 | orchestrator | changed: .d..t...... ./ 2026-02-14 06:55:10.925111 | 2026-02-14 06:55:10.925247 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2026-02-14 06:55:10.961966 | orchestrator | skipping: Conditional result was False 2026-02-14 06:55:10.965600 | orchestrator | skipping: Conditional result was False 2026-02-14 06:55:10.983063 | 2026-02-14 06:55:10.983180 | PLAY RECAP 2026-02-14 06:55:10.983261 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2026-02-14 06:55:10.983304 | 2026-02-14 06:55:11.109718 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-02-14 06:55:11.112146 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-02-14 06:55:11.869524 | 2026-02-14 06:55:11.869695 | PLAY [Base post] 2026-02-14 06:55:11.884748 | 2026-02-14 06:55:11.884919 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2026-02-14 06:55:12.826574 | orchestrator | changed 2026-02-14 06:55:12.836902 | 2026-02-14 06:55:12.837030 | PLAY RECAP 2026-02-14 06:55:12.837105 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-02-14 06:55:12.837180 | 2026-02-14 06:55:12.961375 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-02-14 06:55:12.963832 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2026-02-14 06:55:13.801265 | 2026-02-14 06:55:13.801458 | PLAY [Base post-logs] 2026-02-14 06:55:13.814168 | 2026-02-14 06:55:13.814305 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2026-02-14 
06:55:14.310177 | localhost | changed 2026-02-14 06:55:14.328166 | 2026-02-14 06:55:14.328344 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-02-14 06:55:14.368916 | localhost | ok 2026-02-14 06:55:14.375364 | 2026-02-14 06:55:14.375493 | TASK [Set zuul-log-path fact] 2026-02-14 06:55:14.402483 | localhost | ok 2026-02-14 06:55:14.415058 | 2026-02-14 06:55:14.415202 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-02-14 06:55:14.453123 | localhost | ok 2026-02-14 06:55:14.461472 | 2026-02-14 06:55:14.461674 | TASK [upload-logs : Create log directories] 2026-02-14 06:55:14.967247 | localhost | changed 2026-02-14 06:55:14.970118 | 2026-02-14 06:55:14.970223 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-02-14 06:55:15.471753 | localhost -> localhost | ok: Runtime: 0:00:00.007049 2026-02-14 06:55:15.475992 | 2026-02-14 06:55:15.476105 | TASK [upload-logs : Upload logs to log server] 2026-02-14 06:55:16.028785 | localhost | Output suppressed because no_log was given 2026-02-14 06:55:16.030727 | 2026-02-14 06:55:16.030825 | LOOP [upload-logs : Compress console log and json output] 2026-02-14 06:55:16.091665 | localhost | skipping: Conditional result was False 2026-02-14 06:55:16.097691 | localhost | skipping: Conditional result was False 2026-02-14 06:55:16.112247 | 2026-02-14 06:55:16.112472 | LOOP [upload-logs : Upload compressed console log and json output] 2026-02-14 06:55:16.160988 | localhost | skipping: Conditional result was False 2026-02-14 06:55:16.161636 | 2026-02-14 06:55:16.165088 | localhost | skipping: Conditional result was False 2026-02-14 06:55:16.174742 | 2026-02-14 06:55:16.175020 | LOOP [upload-logs : Upload console log and json output]